Red Hat Bugzilla – Attachment 1477127 Details for Bug 1619212
set_fact rule_name before luminous: 'dict object' has no attribute u'dummy'
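The error in the bug summary, "'dict object' has no attribute u'dummy'", is the message Ansible's Jinja2 templating produces when a play dereferences a key that is missing from a fact dictionary (for example, a `set_fact` expression reading `some_fact.dummy` before that key exists). A minimal Python analogue of this failure class, with hypothetical names that are not taken from the actual playbook:

```python
# Sketch of the failure class behind "'dict object' has no attribute u'dummy'":
# dereferencing a missing key as an attribute on a dict-like object.
facts = {"rule_name": "replicated_rule"}  # hypothetical fact dict

try:
    # Rough equivalent of a Jinja2 template evaluating {{ facts.dummy }}
    # under strict undefined handling: the attribute is absent, so it raises.
    value = getattr(facts, "dummy")
except AttributeError as exc:
    print(exc)  # 'dict' object has no attribute 'dummy'

# The defensive pattern is a lookup with a default instead of attribute access:
value = facts.get("dummy", "default_rule")
print(value)  # default_rule
```

In an Ansible playbook the corresponding guard would be a `default()` filter or a `when: "'dummy' in facts"` condition on the `set_fact` task; the sketch above only illustrates why the missing key surfaces as an attribute error.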
Description: /var/lib/mistral/config-download-latest/ansible.log
Filename: ansible.log
MIME Type: text/plain
Creator: Filip Hubík
Created: 2018-08-20 11:05:32 UTC
Size: 3.63 MB
>2018-08-20 06:19:17,346 p=1013 u=mistral | Using /var/lib/mistral/overcloud/ansible.cfg as config file >2018-08-20 06:19:17,393 p=1013 u=mistral | [WARNING]: Could not match supplied host pattern, ignoring: > >2018-08-20 06:19:18,031 p=1013 u=mistral | PLAY [Gather facts from undercloud] ******************************************** >2018-08-20 06:19:18,043 p=1013 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-08-20 06:19:18,044 p=1013 u=mistral | Monday 20 August 2018 06:19:18 -0400 (0:00:00.074) 0:00:00.074 ********* >2018-08-20 06:19:29,773 p=1013 u=mistral | ok: [undercloud] >2018-08-20 06:19:29,789 p=1013 u=mistral | PLAY [Gather facts from overcloud] ********************************************* >2018-08-20 06:19:29,798 p=1013 u=mistral | TASK [Gathering Facts] ********************************************************* >2018-08-20 06:19:29,799 p=1013 u=mistral | Monday 20 August 2018 06:19:29 -0400 (0:00:11.754) 0:00:11.828 ********* >2018-08-20 06:19:33,692 p=1013 u=mistral | ok: [compute-0] >2018-08-20 06:19:33,862 p=1013 u=mistral | ok: [ceph-0] >2018-08-20 06:19:33,989 p=1013 u=mistral | ok: [controller-0] >2018-08-20 06:19:34,014 p=1013 u=mistral | PLAY [Load global variables] *************************************************** >2018-08-20 06:19:34,034 p=1013 u=mistral | TASK [include_vars] ************************************************************ >2018-08-20 06:19:34,035 p=1013 u=mistral | Monday 20 August 2018 06:19:34 -0400 (0:00:04.235) 0:00:16.064 ********* >2018-08-20 06:19:34,081 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": 
"172.17.3.10,ceph-0.localdomain,ceph-0,172.17.3.10,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.25,compute-0.localdomain,compute-0,172.17.3.28,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.25,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.14,controller-0.storage.localdomain,controller-0.storage,172.17.4.12,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.26,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.105,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-08-20 06:19:34,110 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": 
"172.17.3.10,ceph-0.localdomain,ceph-0,172.17.3.10,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.25,compute-0.localdomain,compute-0,172.17.3.28,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.25,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.14,controller-0.storage.localdomain,controller-0.storage,172.17.4.12,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.26,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.105,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-08-20 06:19:34,113 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": 
"172.17.3.10,ceph-0.localdomain,ceph-0,172.17.3.10,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.25,compute-0.localdomain,compute-0,172.17.3.28,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.25,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.14,controller-0.storage.localdomain,controller-0.storage,172.17.4.12,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.26,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.105,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-08-20 06:19:34,145 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deploy_steps_max": 6, "ssh_known_hosts": {"ceph-0": 
"172.17.3.10,ceph-0.localdomain,ceph-0,172.17.3.10,ceph-0.storage.localdomain,ceph-0.storage,172.17.4.10,ceph-0.storagemgmt.localdomain,ceph-0.storagemgmt,192.168.24.16,ceph-0.internalapi.localdomain,ceph-0.internalapi,192.168.24.16,ceph-0.tenant.localdomain,ceph-0.tenant,192.168.24.16,ceph-0.external.localdomain,ceph-0.external,192.168.24.16,ceph-0.management.localdomain,ceph-0.management,192.168.24.16,ceph-0.ctlplane.localdomain,ceph-0.ctlplane", "compute-0": "172.17.1.25,compute-0.localdomain,compute-0,172.17.3.28,compute-0.storage.localdomain,compute-0.storage,192.168.24.13,compute-0.storagemgmt.localdomain,compute-0.storagemgmt,172.17.1.25,compute-0.internalapi.localdomain,compute-0.internalapi,172.17.2.19,compute-0.tenant.localdomain,compute-0.tenant,192.168.24.13,compute-0.external.localdomain,compute-0.external,192.168.24.13,compute-0.management.localdomain,compute-0.management,192.168.24.13,compute-0.ctlplane.localdomain,compute-0.ctlplane", "controller-0": "172.17.1.16,controller-0.localdomain,controller-0,172.17.3.14,controller-0.storage.localdomain,controller-0.storage,172.17.4.12,controller-0.storagemgmt.localdomain,controller-0.storagemgmt,172.17.1.16,controller-0.internalapi.localdomain,controller-0.internalapi,172.17.2.26,controller-0.tenant.localdomain,controller-0.tenant,10.0.0.105,controller-0.external.localdomain,controller-0.external,192.168.24.12,controller-0.management.localdomain,controller-0.management,192.168.24.12,controller-0.ctlplane.localdomain,controller-0.ctlplane"}}, "ansible_included_var_files": ["/var/lib/mistral/overcloud/global_vars.yaml"], "changed": false} >2018-08-20 06:19:34,151 p=1013 u=mistral | PLAY [Common roles for TripleO servers] **************************************** >2018-08-20 06:19:34,171 p=1013 u=mistral | TASK [tripleo-bootstrap : Deploy required packages to bootstrap TripleO] ******* >2018-08-20 06:19:34,171 p=1013 u=mistral | Monday 20 August 2018 06:19:34 -0400 (0:00:00.136) 0:00:16.201 ********* 
>2018-08-20 06:19:35,013 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-08-20 06:19:35,020 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-08-20 06:19:35,032 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["openstack-heat-agents-1.6.1-0.20180709100740.fdd6a5f.el7ost.noarch providing openstack-heat-agents is already installed", "jq-1.3-4.el7ost.x86_64 providing jq is already installed"]} >2018-08-20 06:19:35,050 p=1013 u=mistral | TASK [tripleo-bootstrap : Create /var/lib/heat-config/tripleo-config-download directory for deployment data] *** >2018-08-20 06:19:35,050 p=1013 u=mistral | Monday 20 August 2018 06:19:35 -0400 (0:00:00.879) 0:00:17.080 ********* >2018-08-20 06:19:35,414 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:19:35,418 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:19:35,419 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/heat-config/tripleo-config-download", "secontext": 
"unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:19:35,439 p=1013 u=mistral | TASK [tripleo-ssh-known-hosts : Template /etc/ssh/ssh_known_hosts] ************* >2018-08-20 06:19:35,439 p=1013 u=mistral | Monday 20 August 2018 06:19:35 -0400 (0:00:00.388) 0:00:17.469 ********* >2018-08-20 06:19:36,406 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "6a36aa3fb615441676e86a94beb7f6e758a6a114", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "4757c7fbf4a82bf89a295a308c62926c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 2628, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760375.54-64334061987456/source", "state": "file", "uid": 0} >2018-08-20 06:19:36,407 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "6a36aa3fb615441676e86a94beb7f6e758a6a114", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "4757c7fbf4a82bf89a295a308c62926c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 2628, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760375.47-4688743445260/source", "state": "file", "uid": 0} >2018-08-20 06:19:36,413 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "6a36aa3fb615441676e86a94beb7f6e758a6a114", "dest": "/etc/ssh/ssh_known_hosts", "gid": 0, "group": "root", "md5sum": "4757c7fbf4a82bf89a295a308c62926c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 2628, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760375.52-65057495250698/source", "state": "file", "uid": 0} >2018-08-20 06:19:36,420 p=1013 u=mistral | PLAY [Overcloud deploy step tasks for step 0] ********************************** >2018-08-20 06:19:36,428 p=1013 u=mistral | PLAY [Server deployments] ****************************************************** >2018-08-20 06:19:36,450 p=1013 u=mistral | TASK 
[include_tasks] *********************************************************** >2018-08-20 06:19:36,451 p=1013 u=mistral | Monday 20 August 2018 06:19:36 -0400 (0:00:01.011) 0:00:18.481 ********* >2018-08-20 06:19:36,694 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,703 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,711 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,720 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,728 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,735 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,743 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,751 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,759 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Controller/deployments.yaml for controller-0 >2018-08-20 06:19:36,782 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:19:36,782 p=1013 u=mistral | Monday 20 August 2018 06:19:36 -0400 (0:00:00.331) 0:00:18.812 ********* >2018-08-20 06:19:36,890 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "943be372-58ce-439a-990b-59072a0c70d1"}, "changed": false} >2018-08-20 06:19:36,913 p=1013 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-08-20 06:19:36,913 p=1013 u=mistral | Monday 20 August 2018 06:19:36 -0400 (0:00:00.131) 0:00:18.943 ********* >2018-08-20 06:19:37,508 
p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "957c8a2e45b93861f3dc0dbc21a8f2ef2f54157a", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-943be372-58ce-439a-990b-59072a0c70d1", "gid": 0, "group": "root", "md5sum": "8dbafe621898cb54a0f073ae024b008e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 10198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760377.02-100413171634819/source", "state": "file", "uid": 0} >2018-08-20 06:19:37,533 p=1013 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-08-20 06:19:37,533 p=1013 u=mistral | Monday 20 August 2018 06:19:37 -0400 (0:00:00.619) 0:00:19.563 ********* >2018-08-20 06:19:37,779 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:19:37,806 p=1013 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-08-20 06:19:37,807 p=1013 u=mistral | Monday 20 August 2018 06:19:37 -0400 (0:00:00.273) 0:00:19.837 ********* >2018-08-20 06:19:37,826 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:19:37,852 p=1013 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-08-20 06:19:37,852 p=1013 u=mistral | Monday 20 August 2018 06:19:37 -0400 (0:00:00.045) 0:00:19.882 ********* >2018-08-20 06:19:37,872 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:19:37,899 p=1013 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-08-20 06:19:37,899 p=1013 u=mistral | Monday 20 August 2018 06:19:37 -0400 (0:00:00.047) 0:00:19.929 ********* >2018-08-20 06:19:37,917 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional 
result was False"} >2018-08-20 06:19:37,944 p=1013 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-08-20 06:19:37,944 p=1013 u=mistral | Monday 20 August 2018 06:19:37 -0400 (0:00:00.044) 0:00:19.974 ********* >2018-08-20 06:20:07,276 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.notify.json)", "delta": "0:00:29.006056", "end": "2018-08-20 06:20:07.244826", "rc": 0, "start": "2018-08-20 06:19:38.238770", "stderr": "[2018-08-20 06:19:38,268] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json\n[2018-08-20 06:20:06,780] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], 
\\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": 
\\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:19:39 AM] [INFO] Finding active nics\\n[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:19:39 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] applying network configs...\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 
06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth2\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:19:52 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:05 AM] [INFO] running ifup 
on interface: vlan20\\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:06,780] (heat-config) [DEBUG] [2018-08-20 06:19:38,294] (heat-config) [INFO] interface_name=nic1\n[2018-08-20 06:19:38,294] (heat-config) [INFO] bridge_name=br-ex\n[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NetworkDeployment-rgkhr2c3gwkp-TripleOSoftwareDeployment-bq4jtjwpxq3v/a48ff5e1-9660-4c53-8aa6-a8cb2a46a486\n[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:19:38,295] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1\n[2018-08-20 06:20:06,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-08-20 06:20:06,776] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": 
\"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": 
\"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.\n[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.\n[2018/08/20 06:19:39 AM] [INFO] Finding active nics\n[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic\n[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic\n[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic\n[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic\n[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2\n[2018/08/20 06:19:39 AM] [INFO] nic2 mapped to: eth1\n[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-isolated\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan30\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth2\n[2018/08/20 06:19:39 AM] [INFO] applying network configs...\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\n[2018/08/20 06:19:39 AM] 
[INFO] running ifdown on interface: vlan30\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-ex\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth2\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0\n[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50\n[2018/08/20 06:19:52 AM] [INFO] running ifup on interface: vlan20\n[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: 
vlan20\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 
192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-08-20 06:20:06,776] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1\n\n[2018-08-20 06:20:06,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:06,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.notify.json\n[2018-08-20 06:20:07,237] (heat-config) [INFO] \n[2018-08-20 06:20:07,237] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:19:38,268] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json", "[2018-08-20 06:20:06,780] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": 
\\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:19:39 AM] [INFO] Finding active nics\\n[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:19:39 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: 
vlan30\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] applying network configs...\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on 
interface: eth2\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:19:52 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 
192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:06,780] (heat-config) [DEBUG] [2018-08-20 06:19:38,294] (heat-config) [INFO] interface_name=nic1", "[2018-08-20 06:19:38,294] (heat-config) [INFO] bridge_name=br-ex", "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NetworkDeployment-rgkhr2c3gwkp-TripleOSoftwareDeployment-bq4jtjwpxq3v/a48ff5e1-9660-4c53-8aa6-a8cb2a46a486", "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:19:38,295] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1", "[2018-08-20 06:20:06,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-08-20 06:20:06,776] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", 
\"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": 
true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.", "[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.", "[2018/08/20 06:19:39 AM] [INFO] Finding active nics", "[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic", "[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic", "[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic", "[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic", "[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2", "[2018/08/20 06:19:39 AM] [INFO] nic2 mapped to: eth1", "[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0", "[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0", "[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0", "[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-isolated", "[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1", "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20", "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan30", "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40", "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50", "[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex", "[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex", "[2018/08/20 06:19:39 AM] [INFO] 
adding interface: eth2", "[2018/08/20 06:19:39 AM] [INFO] applying network configs...", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth2", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated", "[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex", "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth2", "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1", "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0", "[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50", "[2018/08/20 06:19:52 AM] [INFO] running 
ifup on interface: vlan20", "[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40", "[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan20", "[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40", "[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 
192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-08-20 06:20:06,776] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1", "", "[2018-08-20 06:20:06,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:06,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.notify.json", "[2018-08-20 06:20:07,237] (heat-config) [INFO] ", "[2018-08-20 06:20:07,237] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:07,305 p=1013 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-08-20 06:20:07,305 p=1013 u=mistral | Monday 20 August 2018 06:20:07 -0400 (0:00:29.361) 0:00:49.335 ********* >2018-08-20 06:20:07,362 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:19:38,268] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json", > "[2018-08-20 06:20:06,780] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": 
\\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.12/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": 
[{\\\"ip_netmask\\\": \\\"172.17.1.16/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.14/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.12/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.26/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"10.0.0.105/24\\\"}], \\\"members\\\": [{\\\"name\\\": \\\"nic3\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}], \\\"name\\\": \\\"bridge_name\\\", \\\"routes\\\": [{\\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"10.0.0.1\\\"}], \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:19:39 AM] [INFO] Finding active nics\\n[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:19:39 AM] [INFO] nic2 mapped 
to: eth1\\n[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] applying network configs...\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth2\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:19:52 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:06,780] (heat-config) [DEBUG] [2018-08-20 06:19:38,294] (heat-config) [INFO] interface_name=nic1", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] bridge_name=br-ex", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NetworkDeployment-rgkhr2c3gwkp-TripleOSoftwareDeployment-bq4jtjwpxq3v/a48ff5e1-9660-4c53-8aa6-a8cb2a46a486", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:19:38,294] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:19:38,295] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1", > "[2018-08-20 
06:20:06,775] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-08-20 06:20:06,776] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.12/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.16/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": 
[{\"ip_netmask\": \"172.17.3.14/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.12/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.26/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"10.0.0.105/24\"}], \"members\": [{\"name\": \"nic3\", \"primary\": true, \"type\": \"interface\"}], \"name\": \"bridge_name\", \"routes\": [{\"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"10.0.0.1\"}], \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/08/20 06:19:38 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/08/20 06:19:38 AM] [INFO] Ifcfg net config provider created.", > "[2018/08/20 06:19:38 AM] [INFO] Not using any mapping file.", > "[2018/08/20 06:19:39 AM] [INFO] Finding active nics", > "[2018/08/20 06:19:39 AM] [INFO] eth1 is an embedded active nic", > "[2018/08/20 06:19:39 AM] [INFO] eth0 is an embedded active nic", > "[2018/08/20 06:19:39 AM] [INFO] eth2 is an embedded active nic", > "[2018/08/20 06:19:39 AM] [INFO] lo is not an active nic", > "[2018/08/20 06:19:39 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/08/20 06:19:39 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/08/20 06:19:39 AM] [INFO] nic3 mapped to: eth2", > "[2018/08/20 06:19:39 AM] [INFO] nic2 mapped to: eth1", > "[2018/08/20 06:19:39 AM] [INFO] nic1 mapped to: eth0", > "[2018/08/20 06:19:39 AM] [INFO] adding interface: eth0", > "[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: eth0", > "[2018/08/20 
06:19:39 AM] [INFO] adding bridge: br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] adding interface: eth1", > "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan20", > "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan30", > "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan40", > "[2018/08/20 06:19:39 AM] [INFO] adding vlan: vlan50", > "[2018/08/20 06:19:39 AM] [INFO] adding bridge: br-ex", > "[2018/08/20 06:19:39 AM] [INFO] adding custom route for interface: br-ex", > "[2018/08/20 06:19:39 AM] [INFO] adding interface: eth2", > "[2018/08/20 06:19:39 AM] [INFO] applying network configs...", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth2", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth1", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: eth0", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan50", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan20", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on interface: vlan40", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] running ifdown on bridge: br-ex", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-ex", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/08/20 06:19:39 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-vlan40", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-ex", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-ex", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/08/20 06:19:39 AM] 
[INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/08/20 06:19:39 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/08/20 06:19:39 AM] [INFO] running ifup on bridge: br-ex", > "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth2", > "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth1", > "[2018/08/20 06:19:44 AM] [INFO] running ifup on interface: eth0", > "[2018/08/20 06:19:48 AM] [INFO] running ifup on interface: vlan50", > "[2018/08/20 06:19:52 AM] [INFO] running ifup on interface: vlan20", > "[2018/08/20 06:19:57 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:20:01 AM] [INFO] running ifup on interface: vlan40", > "[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan20", > "[2018/08/20 06:20:05 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan40", > "[2018/08/20 06:20:06 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url 
os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-08-20 06:20:06,776] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/943be372-58ce-439a-990b-59072a0c70d1", > "", > "[2018-08-20 06:20:06,780] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:06,781] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.json < /var/lib/heat-config/deployed/943be372-58ce-439a-990b-59072a0c70d1.notify.json", > "[2018-08-20 06:20:07,237] (heat-config) [INFO] ", > "[2018-08-20 06:20:07,237] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:07,388 p=1013 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-08-20 06:20:07,388 p=1013 u=mistral | Monday 20 August 2018 06:20:07 -0400 (0:00:00.082) 0:00:49.418 ********* >2018-08-20 06:20:07,404 
p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:07,425 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:07,425 p=1013 u=mistral | Monday 20 August 2018 06:20:07 -0400 (0:00:00.036) 0:00:49.455 ********* >2018-08-20 06:20:07,478 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "b55fda33-a25f-49a9-a08a-4f2388d1b608"}, "changed": false} >2018-08-20 06:20:07,502 p=1013 u=mistral | TASK [Render deployment file for ControllerUpgradeInitDeployment] ************** >2018-08-20 06:20:07,502 p=1013 u=mistral | Monday 20 August 2018 06:20:07 -0400 (0:00:00.076) 0:00:49.532 ********* >2018-08-20 06:20:08,057 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "287c882e66d27a2cb41621f9faadac6c79efec0e", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerUpgradeInitDeployment-b55fda33-a25f-49a9-a08a-4f2388d1b608", "gid": 0, "group": "root", "md5sum": "e364368f874e314526246ceb7369e065", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1183, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760407.56-21382667815221/source", "state": "file", "uid": 0} >2018-08-20 06:20:08,081 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerUpgradeInitDeployment] ******* >2018-08-20 06:20:08,081 p=1013 u=mistral | Monday 20 August 2018 06:20:08 -0400 (0:00:00.579) 0:00:50.111 ********* >2018-08-20 06:20:08,278 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:08,304 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerUpgradeInitDeployment] ******** >2018-08-20 06:20:08,304 p=1013 u=mistral | Monday 20 August 2018 06:20:08 -0400 (0:00:00.223) 0:00:50.334 ********* >2018-08-20 06:20:08,323 p=1013 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:08,348 p=1013 u=mistral | TASK [Remove deployed file for ControllerUpgradeInitDeployment when previous deployment failed] *** >2018-08-20 06:20:08,348 p=1013 u=mistral | Monday 20 August 2018 06:20:08 -0400 (0:00:00.043) 0:00:50.378 ********* >2018-08-20 06:20:08,364 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:08,392 p=1013 u=mistral | TASK [Force remove deployed file for ControllerUpgradeInitDeployment] ********** >2018-08-20 06:20:08,392 p=1013 u=mistral | Monday 20 August 2018 06:20:08 -0400 (0:00:00.044) 0:00:50.422 ********* >2018-08-20 06:20:08,412 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:08,436 p=1013 u=mistral | TASK [Run deployment ControllerUpgradeInitDeployment] ************************** >2018-08-20 06:20:08,436 p=1013 u=mistral | Monday 20 August 2018 06:20:08 -0400 (0:00:00.043) 0:00:50.466 ********* >2018-08-20 06:20:09,120 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.notify.json)", "delta": "0:00:00.441521", "end": "2018-08-20 06:20:09.100488", "rc": 0, "start": "2018-08-20 06:20:08.658967", "stderr": "[2018-08-20 06:20:08,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json\n[2018-08-20 06:20:08,716] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:08,716] (heat-config) [DEBUG] [2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 
06:20:08,708] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-ControllerUpgradeInitDeployment-zozqryuyriqs/ba5e63d9-1d21-4b23-926a-a503d4302612\n[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:08,708] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608\n[2018-08-20 06:20:08,712] (heat-config) [INFO] \n[2018-08-20 06:20:08,712] (heat-config) [DEBUG] \n[2018-08-20 06:20:08,713] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608\n\n[2018-08-20 06:20:08,716] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:08,716] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.notify.json\n[2018-08-20 06:20:09,092] (heat-config) [INFO] \n[2018-08-20 06:20:09,093] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:08,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json", "[2018-08-20 06:20:08,716] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:08,716] (heat-config) [DEBUG] [2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-ControllerUpgradeInitDeployment-zozqryuyriqs/ba5e63d9-1d21-4b23-926a-a503d4302612", "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", 
"[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:08,708] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608", "[2018-08-20 06:20:08,712] (heat-config) [INFO] ", "[2018-08-20 06:20:08,712] (heat-config) [DEBUG] ", "[2018-08-20 06:20:08,713] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608", "", "[2018-08-20 06:20:08,716] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:08,716] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.notify.json", "[2018-08-20 06:20:09,092] (heat-config) [INFO] ", "[2018-08-20 06:20:09,093] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:09,146 p=1013 u=mistral | TASK [Output for ControllerUpgradeInitDeployment] ****************************** >2018-08-20 06:20:09,146 p=1013 u=mistral | Monday 20 August 2018 06:20:09 -0400 (0:00:00.709) 0:00:51.176 ********* >2018-08-20 06:20:09,193 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:08,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json", > "[2018-08-20 06:20:08,716] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:08,716] (heat-config) [DEBUG] [2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-ControllerUpgradeInitDeployment-zozqryuyriqs/ba5e63d9-1d21-4b23-926a-a503d4302612", 
> "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:08,708] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:08,708] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608", > "[2018-08-20 06:20:08,712] (heat-config) [INFO] ", > "[2018-08-20 06:20:08,712] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:08,713] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b55fda33-a25f-49a9-a08a-4f2388d1b608", > "", > "[2018-08-20 06:20:08,716] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:08,716] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.json < /var/lib/heat-config/deployed/b55fda33-a25f-49a9-a08a-4f2388d1b608.notify.json", > "[2018-08-20 06:20:09,092] (heat-config) [INFO] ", > "[2018-08-20 06:20:09,093] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:09,217 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerUpgradeInitDeployment] *********** >2018-08-20 06:20:09,217 p=1013 u=mistral | Monday 20 August 2018 06:20:09 -0400 (0:00:00.071) 0:00:51.247 ********* >2018-08-20 06:20:09,231 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:09,254 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:09,255 p=1013 u=mistral | Monday 20 August 2018 06:20:09 -0400 (0:00:00.037) 0:00:51.285 ********* >2018-08-20 06:20:09,309 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "5e7450ab-13a0-4a87-808d-013e7cd738d1"}, "changed": false} >2018-08-20 06:20:09,333 p=1013 u=mistral | TASK [Render deployment file for CADeployment] ********************************* >2018-08-20 06:20:09,333 
p=1013 u=mistral | Monday 20 August 2018 06:20:09 -0400 (0:00:00.078) 0:00:51.363 ********* >2018-08-20 06:20:09,870 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "881db834ec0ee0a2f685bcf0ee51fc688b50eca0", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-5e7450ab-13a0-4a87-808d-013e7cd738d1", "gid": 0, "group": "root", "md5sum": "5b8c99755910ff4d285d4a560cbcd019", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2999, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760409.43-229321221656928/source", "state": "file", "uid": 0} >2018-08-20 06:20:09,895 p=1013 u=mistral | TASK [Check if deployed file exists for CADeployment] ************************** >2018-08-20 06:20:09,895 p=1013 u=mistral | Monday 20 August 2018 06:20:09 -0400 (0:00:00.561) 0:00:51.925 ********* >2018-08-20 06:20:10,125 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:10,149 p=1013 u=mistral | TASK [Check previous deployment rc for CADeployment] *************************** >2018-08-20 06:20:10,149 p=1013 u=mistral | Monday 20 August 2018 06:20:10 -0400 (0:00:00.254) 0:00:52.179 ********* >2018-08-20 06:20:10,167 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:10,191 p=1013 u=mistral | TASK [Remove deployed file for CADeployment when previous deployment failed] *** >2018-08-20 06:20:10,191 p=1013 u=mistral | Monday 20 August 2018 06:20:10 -0400 (0:00:00.041) 0:00:52.221 ********* >2018-08-20 06:20:10,208 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:10,232 p=1013 u=mistral | TASK [Force remove deployed file for CADeployment] ***************************** >2018-08-20 06:20:10,232 p=1013 u=mistral | Monday 20 August 2018 06:20:10 -0400 (0:00:00.041) 0:00:52.262 ********* >2018-08-20 
06:20:10,248 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:10,277 p=1013 u=mistral | TASK [Run deployment CADeployment] ********************************************* >2018-08-20 06:20:10,277 p=1013 u=mistral | Monday 20 August 2018 06:20:10 -0400 (0:00:00.045) 0:00:52.307 ********* >2018-08-20 06:20:11,556 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.notify.json)", "delta": "0:00:01.085407", "end": "2018-08-20 06:20:11.535115", "rc": 0, "start": "2018-08-20 06:20:10.449708", "stderr": "[2018-08-20 06:20:10,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json\n[2018-08-20 06:20:11,152] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-08-20 06:20:11,152] (heat-config) [DEBUG] [2018-08-20 06:20:10,496] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-08-20 06:20:10,497] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK\nwJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4\nqiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy\ndP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg\nzsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB\nNfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O\nBBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/\nEHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0\nVC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H\nvCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh\nxxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ\nls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP\njeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl\nNj+B6Q2NoXe/7ws=\n-----END CERTIFICATE-----\n[2018-08-20 06:20:10,497] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NodeTLSCAData-xzhaizlqvn4u-CADeployment-bzi6xa3okdfc/83932e3c-029c-4d85-869b-c3fcd57044ce\n[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:10,497] 
(heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1\n[2018-08-20 06:20:11,148] (heat-config) [INFO] \n[2018-08-20 06:20:11,148] (heat-config) [DEBUG] \n[2018-08-20 06:20:11,148] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1\n\n[2018-08-20 06:20:11,152] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:11,153] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.notify.json\n[2018-08-20 06:20:11,528] (heat-config) [INFO] \n[2018-08-20 06:20:11,529] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:10,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json", "[2018-08-20 06:20:11,152] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-08-20 06:20:11,152] (heat-config) [DEBUG] [2018-08-20 06:20:10,496] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-08-20 06:20:10,497] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", 
"qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", "dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", "Nj+B6Q2NoXe/7ws=", "-----END CERTIFICATE-----", "[2018-08-20 06:20:10,497] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NodeTLSCAData-xzhaizlqvn4u-CADeployment-bzi6xa3okdfc/83932e3c-029c-4d85-869b-c3fcd57044ce", "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:10,497] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1", "[2018-08-20 06:20:11,148] (heat-config) [INFO] ", "[2018-08-20 06:20:11,148] (heat-config) [DEBUG] ", "[2018-08-20 06:20:11,148] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1", "", "[2018-08-20 06:20:11,152] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:11,153] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.notify.json", "[2018-08-20 06:20:11,528] (heat-config) [INFO] ", "[2018-08-20 06:20:11,529] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:11,580 p=1013 u=mistral | TASK [Output for CADeployment] ************************************************* >2018-08-20 06:20:11,580 p=1013 u=mistral | Monday 20 August 2018 06:20:11 -0400 (0:00:01.302) 0:00:53.610 ********* >2018-08-20 06:20:11,675 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:10,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json", > "[2018-08-20 06:20:11,152] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-08-20 06:20:11,152] (heat-config) [DEBUG] [2018-08-20 06:20:10,496] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", > "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", > "qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", > 
"dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", > "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", > "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", > "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", > "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", > "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", > "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", > "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", > "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", > "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", > "Nj+B6Q2NoXe/7ws=", > "-----END CERTIFICATE-----", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_stack_id=overcloud-Controller-6im6lhmnf4vv-0-l2yz7fnpvivx-NodeTLSCAData-xzhaizlqvn4u-CADeployment-bzi6xa3okdfc/83932e3c-029c-4d85-869b-c3fcd57044ce", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:10,497] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:10,497] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1", > "[2018-08-20 06:20:11,148] (heat-config) [INFO] ", > "[2018-08-20 06:20:11,148] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:11,148] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/5e7450ab-13a0-4a87-808d-013e7cd738d1", > "", > "[2018-08-20 06:20:11,152] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:11,153] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.json < /var/lib/heat-config/deployed/5e7450ab-13a0-4a87-808d-013e7cd738d1.notify.json", > "[2018-08-20 06:20:11,528] (heat-config) [INFO] ", > "[2018-08-20 06:20:11,529] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:11,700 p=1013 u=mistral | TASK [Check-mode for Run deployment CADeployment] ****************************** >2018-08-20 06:20:11,700 p=1013 u=mistral | Monday 20 August 2018 06:20:11 -0400 (0:00:00.119) 0:00:53.730 ********* >2018-08-20 06:20:11,713 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:11,734 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:11,734 p=1013 u=mistral | Monday 20 August 2018 06:20:11 -0400 (0:00:00.034) 0:00:53.764 ********* >2018-08-20 06:20:12,103 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "1a1b72c5-fe46-4581-b47a-827241a7aed1"}, "changed": false} >2018-08-20 06:20:12,129 p=1013 u=mistral | TASK [Render deployment file for ControllerDeployment] ************************* >2018-08-20 06:20:12,129 p=1013 u=mistral | Monday 20 August 2018 06:20:12 -0400 (0:00:00.394) 0:00:54.159 ********* >2018-08-20 06:20:12,983 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "6e6ee14542c28bf4ccdf6321b5ff3d3c28a532cb", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerDeployment-1a1b72c5-fe46-4581-b47a-827241a7aed1", "gid": 0, "group": "root", "md5sum": "dca42daea220c9c6f9f6038ead6f23ec", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 73392, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760412.52-245079307263952/source", "state": "file", "uid": 0} >2018-08-20 06:20:13,006 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerDeployment] 
****************** >2018-08-20 06:20:13,006 p=1013 u=mistral | Monday 20 August 2018 06:20:13 -0400 (0:00:00.876) 0:00:55.036 ********* >2018-08-20 06:20:13,239 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:13,264 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerDeployment] ******************* >2018-08-20 06:20:13,264 p=1013 u=mistral | Monday 20 August 2018 06:20:13 -0400 (0:00:00.258) 0:00:55.294 ********* >2018-08-20 06:20:13,283 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:13,305 p=1013 u=mistral | TASK [Remove deployed file for ControllerDeployment when previous deployment failed] *** >2018-08-20 06:20:13,305 p=1013 u=mistral | Monday 20 August 2018 06:20:13 -0400 (0:00:00.040) 0:00:55.335 ********* >2018-08-20 06:20:13,322 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:13,347 p=1013 u=mistral | TASK [Force remove deployed file for ControllerDeployment] ********************* >2018-08-20 06:20:13,347 p=1013 u=mistral | Monday 20 August 2018 06:20:13 -0400 (0:00:00.042) 0:00:55.377 ********* >2018-08-20 06:20:13,364 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:13,387 p=1013 u=mistral | TASK [Run deployment ControllerDeployment] ************************************* >2018-08-20 06:20:13,388 p=1013 u=mistral | Monday 20 August 2018 06:20:13 -0400 (0:00:00.040) 0:00:55.417 ********* >2018-08-20 06:20:14,179 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.notify.json)", "delta": "0:00:00.554456", "end": "2018-08-20 06:20:14.160758", "rc": 0, "start": 
"2018-08-20 06:20:13.606302", "stderr": "[2018-08-20 06:20:13,637] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json\n[2018-08-20 06:20:13,753] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:13,753] (heat-config) [DEBUG] \n[2018-08-20 06:20:13,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:20:13,754] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.notify.json\n[2018-08-20 06:20:14,153] (heat-config) [INFO] \n[2018-08-20 06:20:14,153] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:13,637] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json", "[2018-08-20 06:20:13,753] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:13,753] (heat-config) [DEBUG] ", "[2018-08-20 06:20:13,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-08-20 06:20:13,754] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.notify.json", "[2018-08-20 06:20:14,153] (heat-config) [INFO] ", "[2018-08-20 06:20:14,153] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:14,205 p=1013 u=mistral | TASK [Output for ControllerDeployment] ***************************************** >2018-08-20 06:20:14,205 p=1013 u=mistral | Monday 20 August 2018 06:20:14 -0400 (0:00:00.817) 0:00:56.235 ********* >2018-08-20 06:20:14,256 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 
06:20:13,637] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json", > "[2018-08-20 06:20:13,753] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:13,753] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:13,753] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:20:13,754] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.json < /var/lib/heat-config/deployed/1a1b72c5-fe46-4581-b47a-827241a7aed1.notify.json", > "[2018-08-20 06:20:14,153] (heat-config) [INFO] ", > "[2018-08-20 06:20:14,153] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:14,281 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerDeployment] ********************** >2018-08-20 06:20:14,281 p=1013 u=mistral | Monday 20 August 2018 06:20:14 -0400 (0:00:00.076) 0:00:56.311 ********* >2018-08-20 06:20:14,295 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:14,319 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:14,319 p=1013 u=mistral | Monday 20 August 2018 06:20:14 -0400 (0:00:00.037) 0:00:56.349 ********* >2018-08-20 06:20:14,379 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "cc6aa566-a1de-4658-9fdc-ad8f23c98497"}, "changed": false} >2018-08-20 06:20:14,406 p=1013 u=mistral | TASK [Render deployment file for ControllerHostsDeployment] ******************** >2018-08-20 06:20:14,406 p=1013 u=mistral | Monday 20 August 2018 06:20:14 -0400 (0:00:00.086) 0:00:56.436 ********* >2018-08-20 06:20:14,937 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3e4b4bd8e922f3f184c6378b7ece50572d9a6417", 
"dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostsDeployment-cc6aa566-a1de-4658-9fdc-ad8f23c98497", "gid": 0, "group": "root", "md5sum": "79a5270bfb8026ec36de83dbc8361f69", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4435, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760414.48-58635379981158/source", "state": "file", "uid": 0} >2018-08-20 06:20:14,962 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerHostsDeployment] ************* >2018-08-20 06:20:14,962 p=1013 u=mistral | Monday 20 August 2018 06:20:14 -0400 (0:00:00.555) 0:00:56.992 ********* >2018-08-20 06:20:15,152 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:15,179 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerHostsDeployment] ************** >2018-08-20 06:20:15,179 p=1013 u=mistral | Monday 20 August 2018 06:20:15 -0400 (0:00:00.217) 0:00:57.209 ********* >2018-08-20 06:20:15,197 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:15,221 p=1013 u=mistral | TASK [Remove deployed file for ControllerHostsDeployment when previous deployment failed] *** >2018-08-20 06:20:15,221 p=1013 u=mistral | Monday 20 August 2018 06:20:15 -0400 (0:00:00.041) 0:00:57.251 ********* >2018-08-20 06:20:15,240 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:15,264 p=1013 u=mistral | TASK [Force remove deployed file for ControllerHostsDeployment] **************** >2018-08-20 06:20:15,265 p=1013 u=mistral | Monday 20 August 2018 06:20:15 -0400 (0:00:00.043) 0:00:57.295 ********* >2018-08-20 06:20:15,281 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:15,304 p=1013 u=mistral | TASK [Run deployment 
ControllerHostsDeployment] ******************************** >2018-08-20 06:20:15,304 p=1013 u=mistral | Monday 20 August 2018 06:20:15 -0400 (0:00:00.039) 0:00:57.334 ********* >2018-08-20 06:20:15,935 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.notify.json)", "delta": "0:00:00.410894", "end": "2018-08-20 06:20:15.888670", "rc": 0, "start": "2018-08-20 06:20:15.477776", "stderr": "[2018-08-20 06:20:15,499] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json\n[2018-08-20 06:20:15,550] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 
compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain 
compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:15,550] (heat-config) [DEBUG] [2018-08-20 06:20:15,519] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-08-20 06:20:15,519] (heat-config) [INFO] 
deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-u6jbjn32dlvk-0-nxdklnilqgld/3e5f8f15-5358-4713-91b2-ca5c077d1ebb\n[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:15,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497\n[2018-08-20 06:20:15,547] (heat-config) [INFO] \n[2018-08-20 06:20:15,547] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 
ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /controller-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-08-20 
06:20:15,547] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497\n\n[2018-08-20 06:20:15,550] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:15,551] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.notify.json\n[2018-08-20 06:20:15,882] (heat-config) [INFO] \n[2018-08-20 06:20:15,882] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:15,499] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json", "[2018-08-20 06:20:15,550] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 
compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain 
compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:15,550] (heat-config) [DEBUG] [2018-08-20 06:20:15,519] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-08-20 06:20:15,519] 
(heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-u6jbjn32dlvk-0-nxdklnilqgld/3e5f8f15-5358-4713-91b2-ca5c077d1ebb", "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:15,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497", "[2018-08-20 06:20:15,547] (heat-config) [INFO] ", "[2018-08-20 06:20:15,547] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain 
compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 
ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", 
"", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", 
"172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", 
"172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 
ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /controller-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-08-20 06:20:15,547] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497", "", "[2018-08-20 06:20:15,550] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:15,551] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.notify.json", "[2018-08-20 06:20:15,882] (heat-config) [INFO] ", "[2018-08-20 06:20:15,882] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:15,979 p=1013 u=mistral | TASK [Output for ControllerHostsDeployment] ************************************ >2018-08-20 06:20:15,979 p=1013 u=mistral | Monday 20 August 2018 06:20:15 -0400 (0:00:00.674) 0:00:58.009 ********* >2018-08-20 06:20:16,059 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:15,499] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json", > "[2018-08-20 06:20:15,550] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.19 
overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /controller-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:15,550] (heat-config) [DEBUG] [2018-08-20 06:20:15,519] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerHostsDeployment-u6jbjn32dlvk-0-nxdklnilqgld/3e5f8f15-5358-4713-91b2-ca5c077d1ebb", > "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:15,519] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:15,519] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497", > "[2018-08-20 06:20:15,547] (heat-config) [INFO] ", > "[2018-08-20 06:20:15,547] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 
compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 
controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain 
controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 
controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", 
> "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /controller-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-08-20 06:20:15,547] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cc6aa566-a1de-4658-9fdc-ad8f23c98497", > "", > "[2018-08-20 06:20:15,550] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:15,551] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.json < /var/lib/heat-config/deployed/cc6aa566-a1de-4658-9fdc-ad8f23c98497.notify.json", > "[2018-08-20 06:20:15,882] (heat-config) [INFO] ", > "[2018-08-20 06:20:15,882] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:16,102 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerHostsDeployment] ***************** >2018-08-20 06:20:16,102 p=1013 u=mistral | Monday 20 August 2018 06:20:16 -0400 (0:00:00.122) 0:00:58.132 ********* >2018-08-20 06:20:16,118 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:16,142 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:16,142 p=1013 u=mistral | Monday 20 August 2018 06:20:16 -0400 (0:00:00.040) 0:00:58.172 ********* >2018-08-20 06:20:16,293 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "8d6f660d-1fb5-451d-935e-248c8d3661e4"}, "changed": false} >2018-08-20 06:20:16,319 p=1013 u=mistral | TASK [Render deployment file for ControllerAllNodesDeployment] ***************** >2018-08-20 06:20:16,319 p=1013 u=mistral | Monday 20 August 2018 
06:20:16 -0400 (0:00:00.177) 0:00:58.349 ********* >2018-08-20 06:20:16,948 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "f6ac710fad9da67afcfca64a15c2d9d2c896773d", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesDeployment-8d6f660d-1fb5-451d-935e-248c8d3661e4", "gid": 0, "group": "root", "md5sum": "e3268e302bca3afb3146553718be3442", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19169, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760416.49-263417645427045/source", "state": "file", "uid": 0} >2018-08-20 06:20:16,973 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesDeployment] ********** >2018-08-20 06:20:16,973 p=1013 u=mistral | Monday 20 August 2018 06:20:16 -0400 (0:00:00.653) 0:00:59.003 ********* >2018-08-20 06:20:17,163 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:17,189 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesDeployment] *********** >2018-08-20 06:20:17,190 p=1013 u=mistral | Monday 20 August 2018 06:20:17 -0400 (0:00:00.216) 0:00:59.220 ********* >2018-08-20 06:20:17,207 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:17,232 p=1013 u=mistral | TASK [Remove deployed file for ControllerAllNodesDeployment when previous deployment failed] *** >2018-08-20 06:20:17,232 p=1013 u=mistral | Monday 20 August 2018 06:20:17 -0400 (0:00:00.042) 0:00:59.262 ********* >2018-08-20 06:20:17,252 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:17,274 p=1013 u=mistral | TASK [Force remove deployed file for ControllerAllNodesDeployment] ************* >2018-08-20 06:20:17,275 p=1013 u=mistral | Monday 20 August 2018 06:20:17 -0400 (0:00:00.042) 0:00:59.304 ********* >2018-08-20 
06:20:17,290 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:17,315 p=1013 u=mistral | TASK [Run deployment ControllerAllNodesDeployment] ***************************** >2018-08-20 06:20:17,315 p=1013 u=mistral | Monday 20 August 2018 06:20:17 -0400 (0:00:00.040) 0:00:59.345 ********* >2018-08-20 06:20:18,007 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.notify.json)", "delta": "0:00:00.502255", "end": "2018-08-20 06:20:17.988059", "rc": 0, "start": "2018-08-20 06:20:17.485804", "stderr": "[2018-08-20 06:20:17,512] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json\n[2018-08-20 06:20:17,627] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:17,627] (heat-config) [DEBUG] \n[2018-08-20 06:20:17,627] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:20:17,627] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.notify.json\n[2018-08-20 06:20:17,981] (heat-config) [INFO] \n[2018-08-20 06:20:17,982] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:17,512] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json", "[2018-08-20 06:20:17,627] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:17,627] (heat-config) [DEBUG] ", "[2018-08-20 06:20:17,627] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", 
"[2018-08-20 06:20:17,627] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.notify.json", "[2018-08-20 06:20:17,981] (heat-config) [INFO] ", "[2018-08-20 06:20:17,982] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:18,031 p=1013 u=mistral | TASK [Output for ControllerAllNodesDeployment] ********************************* >2018-08-20 06:20:18,031 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.716) 0:01:00.061 ********* >2018-08-20 06:20:18,079 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:17,512] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json", > "[2018-08-20 06:20:17,627] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:17,627] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:17,627] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:20:17,627] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.json < /var/lib/heat-config/deployed/8d6f660d-1fb5-451d-935e-248c8d3661e4.notify.json", > "[2018-08-20 06:20:17,981] (heat-config) [INFO] ", > "[2018-08-20 06:20:17,982] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:18,103 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesDeployment] ************** >2018-08-20 06:20:18,104 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.072) 0:01:00.134 ********* >2018-08-20 06:20:18,117 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:18,138 p=1013 u=mistral | TASK [Lookup deployment 
UUID] ************************************************** >2018-08-20 06:20:18,138 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.034) 0:01:00.168 ********* >2018-08-20 06:20:18,193 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "b9dc0aca-4aaa-4c02-96ce-39559a23f050"}, "changed": false} >2018-08-20 06:20:18,217 p=1013 u=mistral | TASK [Render deployment file for ControllerAllNodesValidationDeployment] ******* >2018-08-20 06:20:18,217 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.079) 0:01:00.247 ********* >2018-08-20 06:20:18,744 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "da348ca3486be1849b483f8f7ae7b9c9e8f116aa", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerAllNodesValidationDeployment-b9dc0aca-4aaa-4c02-96ce-39559a23f050", "gid": 0, "group": "root", "md5sum": "108e20167f23a19a81b11c8401a1ec75", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4941, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760418.27-179325531404565/source", "state": "file", "uid": 0} >2018-08-20 06:20:18,770 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerAllNodesValidationDeployment] *** >2018-08-20 06:20:18,770 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.553) 0:01:00.800 ********* >2018-08-20 06:20:18,952 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:18,977 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerAllNodesValidationDeployment] *** >2018-08-20 06:20:18,977 p=1013 u=mistral | Monday 20 August 2018 06:20:18 -0400 (0:00:00.206) 0:01:01.007 ********* >2018-08-20 06:20:18,994 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:19,021 p=1013 u=mistral | TASK [Remove deployed file for ControllerAllNodesValidationDeployment when 
previous deployment failed] *** >2018-08-20 06:20:19,021 p=1013 u=mistral | Monday 20 August 2018 06:20:19 -0400 (0:00:00.044) 0:01:01.051 ********* >2018-08-20 06:20:19,042 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:19,066 p=1013 u=mistral | TASK [Force remove deployed file for ControllerAllNodesValidationDeployment] *** >2018-08-20 06:20:19,066 p=1013 u=mistral | Monday 20 August 2018 06:20:19 -0400 (0:00:00.045) 0:01:01.096 ********* >2018-08-20 06:20:19,086 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:19,108 p=1013 u=mistral | TASK [Run deployment ControllerAllNodesValidationDeployment] ******************* >2018-08-20 06:20:19,109 p=1013 u=mistral | Monday 20 August 2018 06:20:19 -0400 (0:00:00.042) 0:01:01.139 ********* >2018-08-20 06:20:20,445 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.notify.json)", "delta": "0:00:01.143366", "end": "2018-08-20 06:20:20.422890", "rc": 0, "start": "2018-08-20 06:20:19.279524", "stderr": "[2018-08-20 06:20:19,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json\n[2018-08-20 06:20:20,030] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 
172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:20,030] (heat-config) [DEBUG] [2018-08-20 06:20:19,325] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12\n[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_fqdn=False\n[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_ntp=True\n[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-53ng5odwidp2-0-dcnwwftdkifv/6a630dfc-4eee-4326-8da9-4a4e2615b4b2\n[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:19,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050\n[2018-08-20 06:20:20,025] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\nPing to 10.0.0.105 succeeded.\nSUCCESS\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\nPing to 172.17.2.26 succeeded.\nSUCCESS\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\nPing to 172.17.3.14 succeeded.\nSUCCESS\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\nPing to 172.17.4.12 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 10.0.0.1...Ping to 
10.0.0.1 succeeded.\nSUCCESS\n\n[2018-08-20 06:20:20,026] (heat-config) [DEBUG] \n[2018-08-20 06:20:20,026] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050\n\n[2018-08-20 06:20:20,030] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:20,030] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.notify.json\n[2018-08-20 06:20:20,416] (heat-config) [INFO] \n[2018-08-20 06:20:20,416] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:19,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json", "[2018-08-20 06:20:20,030] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:20,030] (heat-config) [DEBUG] [2018-08-20 06:20:19,325] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", "[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_fqdn=False", "[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_ntp=True", 
"[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-53ng5odwidp2-0-dcnwwftdkifv/6a630dfc-4eee-4326-8da9-4a4e2615b4b2", "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:19,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050", "[2018-08-20 06:20:20,025] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.", "Ping to 10.0.0.105 succeeded.", "SUCCESS", "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.26 for local network 172.17.2.0/24.", "Ping to 172.17.2.26 succeeded.", "SUCCESS", "Trying to ping 172.17.3.14 for local network 172.17.3.0/24.", "Ping to 172.17.3.14 succeeded.", "SUCCESS", "Trying to ping 172.17.4.12 for local network 172.17.4.0/24.", "Ping to 172.17.4.12 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-08-20 06:20:20,026] (heat-config) [DEBUG] ", "[2018-08-20 06:20:20,026] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050", "", "[2018-08-20 06:20:20,030] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:20,030] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.notify.json", "[2018-08-20 
06:20:20,416] (heat-config) [INFO] ", "[2018-08-20 06:20:20,416] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:20,472 p=1013 u=mistral | TASK [Output for ControllerAllNodesValidationDeployment] *********************** >2018-08-20 06:20:20,473 p=1013 u=mistral | Monday 20 August 2018 06:20:20 -0400 (0:00:01.364) 0:01:02.503 ********* >2018-08-20 06:20:20,526 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:19,304] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json", > "[2018-08-20 06:20:20,030] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:20,030] (heat-config) [DEBUG] [2018-08-20 06:20:19,325] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_fqdn=False", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] validate_ntp=True", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_action=CREATE", > 
"[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_stack_id=overcloud-ControllerAllNodesValidationDeployment-53ng5odwidp2-0-dcnwwftdkifv/6a630dfc-4eee-4326-8da9-4a4e2615b4b2", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:19,325] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:19,326] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050", > "[2018-08-20 06:20:20,025] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.", > "Ping to 10.0.0.105 succeeded.", > "SUCCESS", > "Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.26 for local network 172.17.2.0/24.", > "Ping to 172.17.2.26 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.14 for local network 172.17.3.0/24.", > "Ping to 172.17.3.14 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.12 for local network 172.17.4.0/24.", > "Ping to 172.17.4.12 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-08-20 06:20:20,026] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:20,026] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b9dc0aca-4aaa-4c02-96ce-39559a23f050", > "", > "[2018-08-20 06:20:20,030] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:20,030] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.json < /var/lib/heat-config/deployed/b9dc0aca-4aaa-4c02-96ce-39559a23f050.notify.json", > "[2018-08-20 06:20:20,416] (heat-config) [INFO] ", > "[2018-08-20 06:20:20,416] (heat-config) [DEBUG] " > ] > }, > { > "status_code": 
"0" > } > ] >} >2018-08-20 06:20:20,552 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerAllNodesValidationDeployment] **** >2018-08-20 06:20:20,552 p=1013 u=mistral | Monday 20 August 2018 06:20:20 -0400 (0:00:00.079) 0:01:02.582 ********* >2018-08-20 06:20:20,569 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:20,592 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:20,592 p=1013 u=mistral | Monday 20 August 2018 06:20:20 -0400 (0:00:00.040) 0:01:02.622 ********* >2018-08-20 06:20:20,668 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "fc79f36c-2376-4579-8392-d7e5e9098062"}, "changed": false} >2018-08-20 06:20:20,690 p=1013 u=mistral | TASK [Render deployment file for ControllerHostPrepDeployment] ***************** >2018-08-20 06:20:20,690 p=1013 u=mistral | Monday 20 August 2018 06:20:20 -0400 (0:00:00.097) 0:01:02.720 ********* >2018-08-20 06:20:21,225 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "4ca6eb19c7468487db0aa964821733d5af3cf84a", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerHostPrepDeployment-fc79f36c-2376-4579-8392-d7e5e9098062", "gid": 0, "group": "root", "md5sum": "85da01880b9118a335cac677f990b6c1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20020, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760420.76-230099010101763/source", "state": "file", "uid": 0} >2018-08-20 06:20:21,252 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerHostPrepDeployment] ********** >2018-08-20 06:20:21,252 p=1013 u=mistral | Monday 20 August 2018 06:20:21 -0400 (0:00:00.562) 0:01:03.282 ********* >2018-08-20 06:20:21,440 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:21,466 p=1013 u=mistral | TASK 
[Check previous deployment rc for ControllerHostPrepDeployment] *********** >2018-08-20 06:20:21,466 p=1013 u=mistral | Monday 20 August 2018 06:20:21 -0400 (0:00:00.214) 0:01:03.496 ********* >2018-08-20 06:20:21,484 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:21,509 p=1013 u=mistral | TASK [Remove deployed file for ControllerHostPrepDeployment when previous deployment failed] *** >2018-08-20 06:20:21,509 p=1013 u=mistral | Monday 20 August 2018 06:20:21 -0400 (0:00:00.042) 0:01:03.539 ********* >2018-08-20 06:20:21,525 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:21,550 p=1013 u=mistral | TASK [Force remove deployed file for ControllerHostPrepDeployment] ************* >2018-08-20 06:20:21,550 p=1013 u=mistral | Monday 20 August 2018 06:20:21 -0400 (0:00:00.040) 0:01:03.580 ********* >2018-08-20 06:20:21,568 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:21,592 p=1013 u=mistral | TASK [Run deployment ControllerHostPrepDeployment] ***************************** >2018-08-20 06:20:21,593 p=1013 u=mistral | Monday 20 August 2018 06:20:21 -0400 (0:00:00.042) 0:01:03.623 ********* >2018-08-20 06:20:28,026 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.notify.json)", "delta": "0:00:06.233485", "end": "2018-08-20 06:20:28.005893", "rc": 0, "start": "2018-08-20 06:20:21.772408", "stderr": "[2018-08-20 06:20:21,796] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json\n[2018-08-20 06:20:27,600] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] 
***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:27,601] (heat-config) [DEBUG] [2018-08-20 06:20:21,819] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_variables.json\n[2018-08-20 06:20:27,597] (heat-config) [INFO] Return code 0\n[2018-08-20 06:20:27,597] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-08-20 06:20:27,597] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml\n\n[2018-08-20 06:20:27,601] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-08-20 06:20:27,601] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json < 
/var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.notify.json\n[2018-08-20 06:20:27,999] (heat-config) [INFO] \n[2018-08-20 06:20:27,999] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:21,796] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json", "[2018-08-20 06:20:27,600] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:27,601] (heat-config) [DEBUG] [2018-08-20 06:20:21,819] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_variables.json", "[2018-08-20 06:20:27,597] (heat-config) [INFO] Return code 0", "[2018-08-20 06:20:27,597] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP 
*********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-08-20 06:20:27,597] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml", "", "[2018-08-20 06:20:27,601] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-08-20 06:20:27,601] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json < /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.notify.json", "[2018-08-20 06:20:27,999] (heat-config) [INFO] ", "[2018-08-20 06:20:27,999] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:28,052 p=1013 u=mistral | TASK [Output for ControllerHostPrepDeployment] ********************************* >2018-08-20 06:20:28,053 p=1013 u=mistral | Monday 20 August 2018 06:20:28 -0400 (0:00:06.459) 0:01:10.082 ********* >2018-08-20 06:20:28,103 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:21,796] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json", > "[2018-08-20 06:20:27,600] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:27,601] (heat-config) 
[DEBUG] [2018-08-20 06:20:21,819] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_variables.json", > "[2018-08-20 06:20:27,597] (heat-config) [INFO] Return code 0", > "[2018-08-20 06:20:27,597] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-08-20 06:20:27,597] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/fc79f36c-2376-4579-8392-d7e5e9098062_playbook.yaml", > "", > "[2018-08-20 06:20:27,601] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-08-20 06:20:27,601] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.json < /var/lib/heat-config/deployed/fc79f36c-2376-4579-8392-d7e5e9098062.notify.json", > "[2018-08-20 06:20:27,999] (heat-config) [INFO] ", > "[2018-08-20 06:20:27,999] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:28,130 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerHostPrepDeployment] ************** >2018-08-20 06:20:28,130 p=1013 u=mistral | Monday 20 August 2018 06:20:28 -0400 (0:00:00.077) 0:01:10.160 ********* >2018-08-20 06:20:28,145 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:20:28,168 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:28,168 p=1013 u=mistral | Monday 20 August 2018 06:20:28 -0400 (0:00:00.038) 0:01:10.198 ********* >2018-08-20 06:20:28,222 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"deployment_uuid": "898b7457-9d9a-423e-b673-5811f7745282"}, "changed": false} >2018-08-20 06:20:28,246 p=1013 u=mistral | TASK [Render deployment file for ControllerArtifactsDeploy] ******************** >2018-08-20 06:20:28,247 p=1013 u=mistral | Monday 20 August 2018 06:20:28 -0400 (0:00:00.078) 0:01:10.277 ********* >2018-08-20 06:20:28,837 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8904633780f1409efc7a8ea98807e35f2c0d8f2c", "dest": "/var/lib/heat-config/tripleo-config-download/ControllerArtifactsDeploy-898b7457-9d9a-423e-b673-5811f7745282", "gid": 0, "group": "root", "md5sum": "58f136b0108311422d4a91d84a21c623", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2021, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760428.36-213103981449791/source", "state": "file", "uid": 0} >2018-08-20 06:20:28,860 p=1013 u=mistral | TASK [Check if deployed file exists for ControllerArtifactsDeploy] ************* >2018-08-20 06:20:28,861 p=1013 u=mistral | Monday 20 August 2018 06:20:28 -0400 (0:00:00.613) 0:01:10.890 ********* >2018-08-20 06:20:29,094 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:29,120 p=1013 u=mistral | TASK [Check previous deployment rc for ControllerArtifactsDeploy] ************** >2018-08-20 06:20:29,120 p=1013 u=mistral | Monday 20 August 2018 06:20:29 -0400 (0:00:00.259) 0:01:11.150 ********* >2018-08-20 06:20:29,138 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 
06:20:29,162 p=1013 u=mistral | TASK [Remove deployed file for ControllerArtifactsDeploy when previous deployment failed] *** >2018-08-20 06:20:29,162 p=1013 u=mistral | Monday 20 August 2018 06:20:29 -0400 (0:00:00.042) 0:01:11.192 ********* >2018-08-20 06:20:29,182 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:29,208 p=1013 u=mistral | TASK [Force remove deployed file for ControllerArtifactsDeploy] **************** >2018-08-20 06:20:29,208 p=1013 u=mistral | Monday 20 August 2018 06:20:29 -0400 (0:00:00.045) 0:01:11.238 ********* >2018-08-20 06:20:29,225 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:29,248 p=1013 u=mistral | TASK [Run deployment ControllerArtifactsDeploy] ******************************** >2018-08-20 06:20:29,248 p=1013 u=mistral | Monday 20 August 2018 06:20:29 -0400 (0:00:00.040) 0:01:11.278 ********* >2018-08-20 06:20:29,901 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.notify.json)", "delta": "0:00:00.420512", "end": "2018-08-20 06:20:29.881436", "rc": 0, "start": "2018-08-20 06:20:29.460924", "stderr": "[2018-08-20 06:20:29,485] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json\n[2018-08-20 06:20:29,514] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:29,514] (heat-config) [DEBUG] [2018-08-20 06:20:29,504] (heat-config) [INFO] artifact_urls=\n[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752\n[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ControllerArtifactsDeploy-ssbfkfjifwiw-0-nupy5lz2qu44/7c79e1a4-422e-42de-a1f7-e43b503da3cd\n[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:29,505] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282\n[2018-08-20 06:20:29,511] (heat-config) [INFO] No artifact_urls was set. Skipping...\n\n[2018-08-20 06:20:29,511] (heat-config) [DEBUG] \n[2018-08-20 06:20:29,511] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282\n\n[2018-08-20 06:20:29,514] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:29,514] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.notify.json\n[2018-08-20 06:20:29,874] (heat-config) [INFO] \n[2018-08-20 06:20:29,874] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:29,485] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json", "[2018-08-20 06:20:29,514] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:29,514] (heat-config) [DEBUG] [2018-08-20 06:20:29,504] (heat-config) [INFO] artifact_urls=", "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ControllerArtifactsDeploy-ssbfkfjifwiw-0-nupy5lz2qu44/7c79e1a4-422e-42de-a1f7-e43b503da3cd", "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:29,505] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282", "[2018-08-20 06:20:29,511] (heat-config) [INFO] No artifact_urls was set. Skipping...", "", "[2018-08-20 06:20:29,511] (heat-config) [DEBUG] ", "[2018-08-20 06:20:29,511] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282", "", "[2018-08-20 06:20:29,514] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:29,514] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.notify.json", "[2018-08-20 06:20:29,874] (heat-config) [INFO] ", "[2018-08-20 06:20:29,874] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:29,959 p=1013 u=mistral | TASK [Output for ControllerArtifactsDeploy] ************************************ >2018-08-20 06:20:29,960 p=1013 u=mistral | Monday 20 August 2018 06:20:29 -0400 (0:00:00.711) 0:01:11.990 ********* >2018-08-20 06:20:30,007 p=1013 u=mistral | ok: [controller-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:29,485] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json", > "[2018-08-20 06:20:29,514] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:29,514] (heat-config) [DEBUG] [2018-08-20 06:20:29,504] (heat-config) [INFO] artifact_urls=", > "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_server_id=6b6c0959-e03c-43ff-aaad-2d2d48ec7752", > "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ControllerArtifactsDeploy-ssbfkfjifwiw-0-nupy5lz2qu44/7c79e1a4-422e-42de-a1f7-e43b503da3cd", > "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:29,505] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:29,505] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282", > "[2018-08-20 06:20:29,511] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", > "", > "[2018-08-20 06:20:29,511] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:29,511] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/898b7457-9d9a-423e-b673-5811f7745282", > "", > "[2018-08-20 06:20:29,514] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:29,514] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.json < /var/lib/heat-config/deployed/898b7457-9d9a-423e-b673-5811f7745282.notify.json", > "[2018-08-20 06:20:29,874] (heat-config) [INFO] ", > "[2018-08-20 06:20:29,874] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:30,031 p=1013 u=mistral | TASK [Check-mode for Run deployment ControllerArtifactsDeploy] ***************** >2018-08-20 06:20:30,031 p=1013 u=mistral | Monday 20 August 2018 06:20:30 -0400 (0:00:00.071) 0:01:12.061 ********* >2018-08-20 06:20:30,044 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:30,067 p=1013 u=mistral | TASK [include_tasks] *********************************************************** >2018-08-20 06:20:30,067 p=1013 u=mistral | Monday 20 August 2018 06:20:30 -0400 (0:00:00.035) 0:01:12.097 ********* >2018-08-20 06:20:30,299 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,308 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,316 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,325 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,334 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,342 p=1013 u=mistral | included: 
/var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,351 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,362 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,372 p=1013 u=mistral | included: /var/lib/mistral/overcloud/Compute/deployments.yaml for compute-0 >2018-08-20 06:20:30,416 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:30,417 p=1013 u=mistral | Monday 20 August 2018 06:20:30 -0400 (0:00:00.349) 0:01:12.447 ********* >2018-08-20 06:20:30,477 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "b938c369-7ada-4254-8a7c-9023b955fbf0"}, "changed": false} >2018-08-20 06:20:30,497 p=1013 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-08-20 06:20:30,497 p=1013 u=mistral | Monday 20 August 2018 06:20:30 -0400 (0:00:00.080) 0:01:12.527 ********* >2018-08-20 06:20:31,040 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "23a2be19498354e95fe65a0b62173231e9f77a03", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-b938c369-7ada-4254-8a7c-9023b955fbf0", "gid": 0, "group": "root", "md5sum": "8b96d84120625a90d3f05caa1ba94219", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9259, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760430.56-272560439791826/source", "state": "file", "uid": 0} >2018-08-20 06:20:31,062 p=1013 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-08-20 06:20:31,062 p=1013 u=mistral | Monday 20 August 2018 06:20:31 -0400 (0:00:00.564) 0:01:13.092 ********* >2018-08-20 06:20:31,247 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:31,270 p=1013 
u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-08-20 06:20:31,270 p=1013 u=mistral | Monday 20 August 2018 06:20:31 -0400 (0:00:00.207) 0:01:13.300 ********* >2018-08-20 06:20:31,291 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:31,311 p=1013 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-08-20 06:20:31,311 p=1013 u=mistral | Monday 20 August 2018 06:20:31 -0400 (0:00:00.040) 0:01:13.341 ********* >2018-08-20 06:20:31,332 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:31,355 p=1013 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-08-20 06:20:31,356 p=1013 u=mistral | Monday 20 August 2018 06:20:31 -0400 (0:00:00.044) 0:01:13.386 ********* >2018-08-20 06:20:31,376 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:31,396 p=1013 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-08-20 06:20:31,397 p=1013 u=mistral | Monday 20 August 2018 06:20:31 -0400 (0:00:00.040) 0:01:13.426 ********* >2018-08-20 06:20:51,602 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.notify.json)", "delta": "0:00:20.009450", "end": "2018-08-20 06:20:51.571748", "rc": 0, "start": "2018-08-20 06:20:31.562298", "stderr": "[2018-08-20 06:20:31,593] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json\n[2018-08-20 06:20:51,133] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 
192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:20:32 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:20:32 AM] [INFO] Finding active nics\\n[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] applying network 
configs...\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config 
--key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:51,133] (heat-config) [DEBUG] [2018-08-20 06:20:31,616] (heat-config) [INFO] interface_name=nic1\n[2018-08-20 06:20:31,616] (heat-config) [INFO] bridge_name=br-ex\n[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NetworkDeployment-vh6p2ls7uv4a-TripleOSoftwareDeployment-odn42av35uxa/686bce03-99de-4aa7-86bc-84727d213aed\n[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 
06:20:31,616] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:31,617] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0\n[2018-08-20 06:20:51,129] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-08-20 06:20:51,129] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], 
\"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.\n[2018/08/20 06:20:32 AM] [INFO] Not using any mapping file.\n[2018/08/20 06:20:32 AM] [INFO] Finding active nics\n[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic\n[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic\n[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic\n[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic\n[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2\n[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1\n[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0\n[2018/08/20 06:20:32 AM] [INFO] adding custom route for interface: eth0\n[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2\n[2018/08/20 06:20:32 AM] [INFO] applying network configs...\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\n[2018/08/20 06:20:32 
AM] [INFO] running ifdown on interface: vlan50\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/08/20 06:20:32 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth0\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated\n[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0\n[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20\n[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url 
os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-08-20 06:20:51,129] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0\n\n[2018-08-20 06:20:51,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:51,134] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.notify.json\n[2018-08-20 06:20:51,565] (heat-config) [INFO] \n[2018-08-20 06:20:51,565] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:31,593] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json", "[2018-08-20 06:20:51,133] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": 
\\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", 
\\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:20:32 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:20:32 AM] [INFO] Finding active nics\\n[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] applying network configs...\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] 
running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:51,133] (heat-config) [DEBUG] [2018-08-20 06:20:31,616] (heat-config) [INFO] interface_name=nic1", "[2018-08-20 06:20:31,616] (heat-config) [INFO] bridge_name=br-ex", "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NetworkDeployment-vh6p2ls7uv4a-TripleOSoftwareDeployment-odn42av35uxa/686bce03-99de-4aa7-86bc-84727d213aed", "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", 
"[2018-08-20 06:20:31,617] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0", "[2018-08-20 06:20:51,129] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-08-20 06:20:51,129] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": 
\"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.", "[2018/08/20 06:20:32 AM] [INFO] Not using any mapping file.", "[2018/08/20 06:20:32 AM] [INFO] Finding active nics", "[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic", "[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic", "[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic", "[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic", "[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2", "[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1", "[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0", "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0", "[2018/08/20 06:20:32 AM] [INFO] adding custom route for interface: eth0", "[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated", "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1", "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20", "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30", "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50", "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2", "[2018/08/20 06:20:32 AM] [INFO] applying network configs...", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30", 
"[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50", "[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/08/20 
06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated", "[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2", "[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1", "[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0", "[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20", "[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50", "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20", "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ 
METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-08-20 06:20:51,129] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0", "", "[2018-08-20 06:20:51,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:51,134] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.notify.json", "[2018-08-20 06:20:51,565] (heat-config) [INFO] ", "[2018-08-20 06:20:51,565] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:51,628 p=1013 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-08-20 06:20:51,629 p=1013 u=mistral | Monday 20 August 2018 06:20:51 -0400 (0:00:20.232) 
0:01:33.659 ********* >2018-08-20 06:20:51,774 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:31,593] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json", > "[2018-08-20 06:20:51,133] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.13/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": 
\\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.1.25/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 20}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.28/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.2.19/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 50}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}, {\\\"name\\\": \\\"nic3\\\", \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:20:32 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:20:32 AM] [INFO] Finding active nics\\n[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding custom route for 
interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] applying network configs...\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50\\n[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:20:32 AM] [INFO] 
Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url 
os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:51,133] (heat-config) [DEBUG] [2018-08-20 06:20:31,616] (heat-config) [INFO] interface_name=nic1", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] bridge_name=br-ex", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NetworkDeployment-vh6p2ls7uv4a-TripleOSoftwareDeployment-odn42av35uxa/686bce03-99de-4aa7-86bc-84727d213aed", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:31,616] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:31,617] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0", > "[2018-08-20 06:20:51,129] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-08-20 06:20:51,129] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.13/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.1.25/24\"}], \"type\": \"vlan\", \"vlan_id\": 20}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.28/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.2.19/24\"}], \"type\": \"vlan\", \"vlan_id\": 50}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}, {\"name\": \"nic3\", \"type\": \"interface\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/08/20 06:20:32 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/08/20 06:20:32 AM] [INFO] Ifcfg net config provider created.", > "[2018/08/20 06:20:32 AM] [INFO] 
Not using any mapping file.", > "[2018/08/20 06:20:32 AM] [INFO] Finding active nics", > "[2018/08/20 06:20:32 AM] [INFO] eth1 is an embedded active nic", > "[2018/08/20 06:20:32 AM] [INFO] eth0 is an embedded active nic", > "[2018/08/20 06:20:32 AM] [INFO] eth2 is an embedded active nic", > "[2018/08/20 06:20:32 AM] [INFO] lo is not an active nic", > "[2018/08/20 06:20:32 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/08/20 06:20:32 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/08/20 06:20:32 AM] [INFO] nic3 mapped to: eth2", > "[2018/08/20 06:20:32 AM] [INFO] nic2 mapped to: eth1", > "[2018/08/20 06:20:32 AM] [INFO] nic1 mapped to: eth0", > "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth0", > "[2018/08/20 06:20:32 AM] [INFO] adding custom route for interface: eth0", > "[2018/08/20 06:20:32 AM] [INFO] adding bridge: br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth1", > "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan20", > "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan30", > "[2018/08/20 06:20:32 AM] [INFO] adding vlan: vlan50", > "[2018/08/20 06:20:32 AM] [INFO] adding interface: eth2", > "[2018/08/20 06:20:32 AM] [INFO] applying network configs...", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth2", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth1", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: eth0", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan20", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on interface: vlan50", > "[2018/08/20 06:20:32 AM] [INFO] running ifdown on bridge: 
br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan50", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan20", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan20", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan50", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan20", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth2", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan50", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth2", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth2", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/08/20 06:20:32 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/08/20 06:20:32 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/08/20 06:20:32 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/08/20 06:20:32 AM] [INFO] running ifup on interface: eth2", > "[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth1", > "[2018/08/20 06:20:33 AM] [INFO] running ifup on interface: eth0", > "[2018/08/20 06:20:37 AM] [INFO] running ifup on interface: vlan20", > "[2018/08/20 06:20:41 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:20:46 AM] [INFO] running ifup on interface: vlan50", > "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan20", > "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:20:50 AM] [INFO] running ifup on interface: vlan50", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", 
> "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-08-20 06:20:51,129] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/b938c369-7ada-4254-8a7c-9023b955fbf0", > "", > "[2018-08-20 06:20:51,133] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:51,134] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.json < /var/lib/heat-config/deployed/b938c369-7ada-4254-8a7c-9023b955fbf0.notify.json", > "[2018-08-20 06:20:51,565] (heat-config) [INFO] ", > "[2018-08-20 06:20:51,565] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:51,802 p=1013 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-08-20 06:20:51,802 p=1013 u=mistral | Monday 20 August 2018 06:20:51 -0400 (0:00:00.173) 0:01:33.832 ********* >2018-08-20 06:20:51,819 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:51,839 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:51,839 p=1013 u=mistral | Monday 20 August 2018 06:20:51 -0400 (0:00:00.037) 
0:01:33.869 ********* >2018-08-20 06:20:51,945 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1"}, "changed": false} >2018-08-20 06:20:51,965 p=1013 u=mistral | TASK [Render deployment file for NovaComputeUpgradeInitDeployment] ************* >2018-08-20 06:20:51,966 p=1013 u=mistral | Monday 20 August 2018 06:20:51 -0400 (0:00:00.126) 0:01:33.996 ********* >2018-08-20 06:20:52,510 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "f90a4784368615acfe37773415a84727d284896f", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeUpgradeInitDeployment-cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1", "gid": 0, "group": "root", "md5sum": "dfe9b613aa23098f75ddb362a34c9ccf", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1182, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760452.06-29571676368040/source", "state": "file", "uid": 0} >2018-08-20 06:20:52,533 p=1013 u=mistral | TASK [Check if deployed file exists for NovaComputeUpgradeInitDeployment] ****** >2018-08-20 06:20:52,533 p=1013 u=mistral | Monday 20 August 2018 06:20:52 -0400 (0:00:00.567) 0:01:34.563 ********* >2018-08-20 06:20:52,778 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:52,798 p=1013 u=mistral | TASK [Check previous deployment rc for NovaComputeUpgradeInitDeployment] ******* >2018-08-20 06:20:52,798 p=1013 u=mistral | Monday 20 August 2018 06:20:52 -0400 (0:00:00.265) 0:01:34.828 ********* >2018-08-20 06:20:52,818 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:52,837 p=1013 u=mistral | TASK [Remove deployed file for NovaComputeUpgradeInitDeployment when previous deployment failed] *** >2018-08-20 06:20:52,837 p=1013 u=mistral | Monday 20 August 2018 06:20:52 -0400 (0:00:00.038) 0:01:34.867 ********* >2018-08-20 06:20:52,856 p=1013 
u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:52,876 p=1013 u=mistral | TASK [Force remove deployed file for NovaComputeUpgradeInitDeployment] ********* >2018-08-20 06:20:52,876 p=1013 u=mistral | Monday 20 August 2018 06:20:52 -0400 (0:00:00.039) 0:01:34.906 ********* >2018-08-20 06:20:52,892 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:52,911 p=1013 u=mistral | TASK [Run deployment NovaComputeUpgradeInitDeployment] ************************* >2018-08-20 06:20:52,911 p=1013 u=mistral | Monday 20 August 2018 06:20:52 -0400 (0:00:00.034) 0:01:34.941 ********* >2018-08-20 06:20:53,595 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.notify.json)", "delta": "0:00:00.427680", "end": "2018-08-20 06:20:53.569110", "rc": 0, "start": "2018-08-20 06:20:53.141430", "stderr": "[2018-08-20 06:20:53,167] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json\n[2018-08-20 06:20:53,198] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:53,198] (heat-config) [DEBUG] [2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NovaComputeUpgradeInitDeployment-pgcwcx2vu4ub/93db91ce-bcd5-4857-83b7-c8ac669a7f7d\n[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:53,190] (heat-config) [INFO] 
deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:53,190] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1\n[2018-08-20 06:20:53,194] (heat-config) [INFO] \n[2018-08-20 06:20:53,194] (heat-config) [DEBUG] \n[2018-08-20 06:20:53,194] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1\n\n[2018-08-20 06:20:53,198] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:53,198] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.notify.json\n[2018-08-20 06:20:53,562] (heat-config) [INFO] \n[2018-08-20 06:20:53,563] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:53,167] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json", "[2018-08-20 06:20:53,198] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:53,198] (heat-config) [DEBUG] [2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NovaComputeUpgradeInitDeployment-pgcwcx2vu4ub/93db91ce-bcd5-4857-83b7-c8ac669a7f7d", "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:53,190] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1", "[2018-08-20 06:20:53,194] (heat-config) [INFO] ", "[2018-08-20 06:20:53,194] (heat-config) [DEBUG] ", "[2018-08-20 
06:20:53,194] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1", "", "[2018-08-20 06:20:53,198] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:53,198] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.notify.json", "[2018-08-20 06:20:53,562] (heat-config) [INFO] ", "[2018-08-20 06:20:53,563] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:53,616 p=1013 u=mistral | TASK [Output for NovaComputeUpgradeInitDeployment] ***************************** >2018-08-20 06:20:53,617 p=1013 u=mistral | Monday 20 August 2018 06:20:53 -0400 (0:00:00.705) 0:01:35.647 ********* >2018-08-20 06:20:53,723 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:53,167] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json", > "[2018-08-20 06:20:53,198] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:53,198] (heat-config) [DEBUG] [2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NovaComputeUpgradeInitDeployment-pgcwcx2vu4ub/93db91ce-bcd5-4857-83b7-c8ac669a7f7d", > "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:53,190] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:53,190] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1", > "[2018-08-20 
06:20:53,194] (heat-config) [INFO] ", > "[2018-08-20 06:20:53,194] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:53,194] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1", > "", > "[2018-08-20 06:20:53,198] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:53,198] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.json < /var/lib/heat-config/deployed/cdb5ca45-e19b-4ab2-b9e3-90dc9eb7bfa1.notify.json", > "[2018-08-20 06:20:53,562] (heat-config) [INFO] ", > "[2018-08-20 06:20:53,563] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:53,745 p=1013 u=mistral | TASK [Check-mode for Run deployment NovaComputeUpgradeInitDeployment] ********** >2018-08-20 06:20:53,745 p=1013 u=mistral | Monday 20 August 2018 06:20:53 -0400 (0:00:00.128) 0:01:35.775 ********* >2018-08-20 06:20:53,765 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:53,784 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:53,785 p=1013 u=mistral | Monday 20 August 2018 06:20:53 -0400 (0:00:00.039) 0:01:35.815 ********* >2018-08-20 06:20:53,892 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "354475e7-7361-402c-913c-29d4a45fe4bf"}, "changed": false} >2018-08-20 06:20:53,955 p=1013 u=mistral | TASK [Render deployment file for CADeployment] ********************************* >2018-08-20 06:20:53,956 p=1013 u=mistral | Monday 20 August 2018 06:20:53 -0400 (0:00:00.171) 0:01:35.986 ********* >2018-08-20 06:20:54,546 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "035e3833ae39842a7ccce3ef4610778a50dc05b7", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-354475e7-7361-402c-913c-29d4a45fe4bf", "gid": 0, 
"group": "root", "md5sum": "4c383be1dc0672bff105070f289a17b6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2996, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760454.03-229383241365711/source", "state": "file", "uid": 0} >2018-08-20 06:20:54,566 p=1013 u=mistral | TASK [Check if deployed file exists for CADeployment] ************************** >2018-08-20 06:20:54,566 p=1013 u=mistral | Monday 20 August 2018 06:20:54 -0400 (0:00:00.610) 0:01:36.596 ********* >2018-08-20 06:20:54,759 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:54,780 p=1013 u=mistral | TASK [Check previous deployment rc for CADeployment] *************************** >2018-08-20 06:20:54,780 p=1013 u=mistral | Monday 20 August 2018 06:20:54 -0400 (0:00:00.213) 0:01:36.810 ********* >2018-08-20 06:20:54,797 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:54,816 p=1013 u=mistral | TASK [Remove deployed file for CADeployment when previous deployment failed] *** >2018-08-20 06:20:54,816 p=1013 u=mistral | Monday 20 August 2018 06:20:54 -0400 (0:00:00.036) 0:01:36.846 ********* >2018-08-20 06:20:54,833 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:54,852 p=1013 u=mistral | TASK [Force remove deployed file for CADeployment] ***************************** >2018-08-20 06:20:54,853 p=1013 u=mistral | Monday 20 August 2018 06:20:54 -0400 (0:00:00.036) 0:01:36.883 ********* >2018-08-20 06:20:54,870 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:54,892 p=1013 u=mistral | TASK [Run deployment CADeployment] ********************************************* >2018-08-20 06:20:54,892 p=1013 u=mistral | Monday 20 August 2018 06:20:54 -0400 (0:00:00.039) 0:01:36.922 ********* 
>2018-08-20 06:20:56,119 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.notify.json)", "delta": "0:00:01.028481", "end": "2018-08-20 06:20:56.092883", "rc": 0, "start": "2018-08-20 06:20:55.064402", "stderr": "[2018-08-20 06:20:55,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json\n[2018-08-20 06:20:55,751] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-08-20 06:20:55,751] (heat-config) [DEBUG] [2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK\nwJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4\nqiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy\ndP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg\nzsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB\nNfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O\nBBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/\nEHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0\nVC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H\nvCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh\nxxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ\nls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP\njeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl\nNj+B6Q2NoXe/7ws=\n-----END CERTIFICATE-----\n[2018-08-20 06:20:55,113] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NodeTLSCAData-7umgtcsgnmje-CADeployment-xpwrybxfxgr7/7e64fb7c-a32c-4f51-b045-24e0574fc761\n[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:55,113] (heat-config) 
[DEBUG] Running /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf\n[2018-08-20 06:20:55,747] (heat-config) [INFO] \n[2018-08-20 06:20:55,748] (heat-config) [DEBUG] \n[2018-08-20 06:20:55,748] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf\n\n[2018-08-20 06:20:55,751] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:55,752] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.notify.json\n[2018-08-20 06:20:56,087] (heat-config) [INFO] \n[2018-08-20 06:20:56,087] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:55,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json", "[2018-08-20 06:20:55,751] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-08-20 06:20:55,751] (heat-config) [DEBUG] [2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", 
"qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", "dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", "Nj+B6Q2NoXe/7ws=", "-----END CERTIFICATE-----", "[2018-08-20 06:20:55,113] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NodeTLSCAData-7umgtcsgnmje-CADeployment-xpwrybxfxgr7/7e64fb7c-a32c-4f51-b045-24e0574fc761", "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:55,113] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf", "[2018-08-20 06:20:55,747] (heat-config) [INFO] ", "[2018-08-20 06:20:55,748] (heat-config) [DEBUG] ", "[2018-08-20 06:20:55,748] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf", "", "[2018-08-20 06:20:55,751] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:55,752] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.notify.json", "[2018-08-20 06:20:56,087] (heat-config) [INFO] ", "[2018-08-20 06:20:56,087] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:56,146 p=1013 u=mistral | TASK [Output for CADeployment] ************************************************* >2018-08-20 06:20:56,146 p=1013 u=mistral | Monday 20 August 2018 06:20:56 -0400 (0:00:01.254) 0:01:38.176 ********* >2018-08-20 06:20:56,210 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:55,090] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json", > "[2018-08-20 06:20:55,751] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-08-20 06:20:55,751] (heat-config) [DEBUG] [2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-08-20 06:20:55,112] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", > "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", > "qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", > 
"dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", > "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", > "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", > "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", > "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", > "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", > "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", > "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", > "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", > "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", > "Nj+B6Q2NoXe/7ws=", > "-----END CERTIFICATE-----", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-5gsedhvmkf5g-0-h3rauqu6s7ri-NodeTLSCAData-7umgtcsgnmje-CADeployment-xpwrybxfxgr7/7e64fb7c-a32c-4f51-b045-24e0574fc761", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:55,113] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:55,113] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf", > "[2018-08-20 06:20:55,747] (heat-config) [INFO] ", > "[2018-08-20 06:20:55,748] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:55,748] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/354475e7-7361-402c-913c-29d4a45fe4bf", > "", > "[2018-08-20 06:20:55,751] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:55,752] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.json < /var/lib/heat-config/deployed/354475e7-7361-402c-913c-29d4a45fe4bf.notify.json", > "[2018-08-20 06:20:56,087] (heat-config) [INFO] ", > "[2018-08-20 06:20:56,087] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:56,234 p=1013 u=mistral | TASK [Check-mode for Run deployment CADeployment] ****************************** >2018-08-20 06:20:56,234 p=1013 u=mistral | Monday 20 August 2018 06:20:56 -0400 (0:00:00.088) 0:01:38.264 ********* >2018-08-20 06:20:56,253 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:56,273 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:56,273 p=1013 u=mistral | Monday 20 August 2018 06:20:56 -0400 (0:00:00.038) 0:01:38.303 ********* >2018-08-20 06:20:56,424 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "4c3ce1f6-fd22-4460-9821-56c92b744f20"}, "changed": false} >2018-08-20 06:20:56,446 p=1013 u=mistral | TASK [Render deployment file for NovaComputeDeployment] ************************ >2018-08-20 06:20:56,447 p=1013 u=mistral | Monday 20 August 2018 06:20:56 -0400 (0:00:00.173) 0:01:38.477 ********* >2018-08-20 06:20:57,065 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "cebfbb74da440b6ecc3a8e1623e9e5ddc110bfae", "dest": "/var/lib/heat-config/tripleo-config-download/NovaComputeDeployment-4c3ce1f6-fd22-4460-9821-56c92b744f20", "gid": 0, "group": "root", "md5sum": "bc5453fe0594eb0b72a728283ca78869", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 21902, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760456.61-153713384379655/source", "state": "file", "uid": 0} >2018-08-20 06:20:57,085 p=1013 u=mistral | TASK [Check if deployed file exists for NovaComputeDeployment] ***************** 
>2018-08-20 06:20:57,085 p=1013 u=mistral | Monday 20 August 2018 06:20:57 -0400 (0:00:00.638) 0:01:39.115 ********* >2018-08-20 06:20:57,270 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:57,292 p=1013 u=mistral | TASK [Check previous deployment rc for NovaComputeDeployment] ****************** >2018-08-20 06:20:57,292 p=1013 u=mistral | Monday 20 August 2018 06:20:57 -0400 (0:00:00.207) 0:01:39.322 ********* >2018-08-20 06:20:57,311 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:57,333 p=1013 u=mistral | TASK [Remove deployed file for NovaComputeDeployment when previous deployment failed] *** >2018-08-20 06:20:57,333 p=1013 u=mistral | Monday 20 August 2018 06:20:57 -0400 (0:00:00.041) 0:01:39.363 ********* >2018-08-20 06:20:57,362 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:57,382 p=1013 u=mistral | TASK [Force remove deployed file for NovaComputeDeployment] ******************** >2018-08-20 06:20:57,382 p=1013 u=mistral | Monday 20 August 2018 06:20:57 -0400 (0:00:00.049) 0:01:39.412 ********* >2018-08-20 06:20:57,402 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:57,424 p=1013 u=mistral | TASK [Run deployment NovaComputeDeployment] ************************************ >2018-08-20 06:20:57,424 p=1013 u=mistral | Monday 20 August 2018 06:20:57 -0400 (0:00:00.041) 0:01:39.454 ********* >2018-08-20 06:20:58,101 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.notify.json)", "delta": "0:00:00.484150", "end": "2018-08-20 06:20:58.080829", "rc": 0, "start": "2018-08-20 06:20:57.596679", "stderr": 
"[2018-08-20 06:20:57,623] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json\n[2018-08-20 06:20:57,736] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:57,736] (heat-config) [DEBUG] \n[2018-08-20 06:20:57,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:20:57,737] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.notify.json\n[2018-08-20 06:20:58,074] (heat-config) [INFO] \n[2018-08-20 06:20:58,075] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:57,623] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json", "[2018-08-20 06:20:57,736] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:57,736] (heat-config) [DEBUG] ", "[2018-08-20 06:20:57,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-08-20 06:20:57,737] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.notify.json", "[2018-08-20 06:20:58,074] (heat-config) [INFO] ", "[2018-08-20 06:20:58,075] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:58,122 p=1013 u=mistral | TASK [Output for NovaComputeDeployment] **************************************** >2018-08-20 06:20:58,122 p=1013 u=mistral | Monday 20 August 2018 06:20:58 -0400 (0:00:00.697) 0:01:40.152 ********* >2018-08-20 06:20:58,178 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:57,623] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json", > "[2018-08-20 06:20:57,736] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:57,736] (heat-config) [DEBUG] ", > "[2018-08-20 06:20:57,736] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:20:57,737] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.json < /var/lib/heat-config/deployed/4c3ce1f6-fd22-4460-9821-56c92b744f20.notify.json", > "[2018-08-20 06:20:58,074] (heat-config) [INFO] ", > "[2018-08-20 06:20:58,075] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:58,200 p=1013 u=mistral | TASK [Check-mode for Run deployment NovaComputeDeployment] ********************* >2018-08-20 06:20:58,200 p=1013 u=mistral | Monday 20 August 2018 06:20:58 -0400 (0:00:00.077) 0:01:40.230 ********* >2018-08-20 06:20:58,216 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:58,236 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:20:58,237 p=1013 u=mistral | Monday 20 August 2018 06:20:58 -0400 (0:00:00.036) 0:01:40.266 ********* >2018-08-20 06:20:58,298 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "409819ca-eafe-4d5e-bc7a-98d1dbf08eb1"}, "changed": false} >2018-08-20 06:20:58,317 p=1013 u=mistral | TASK [Render deployment file for ComputeHostsDeployment] *********************** >2018-08-20 06:20:58,318 p=1013 u=mistral | Monday 20 August 2018 06:20:58 -0400 (0:00:00.081) 0:01:40.348 ********* >2018-08-20 06:20:58,831 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "3312e972032b82d23089952604c09784463a249c", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeHostsDeployment-409819ca-eafe-4d5e-bc7a-98d1dbf08eb1", "gid": 0, "group": "root", "md5sum": "8883cb734093a1ca9deea244f302b1c3", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4429, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760458.37-135282876846619/source", "state": "file", "uid": 0} >2018-08-20 06:20:58,850 p=1013 u=mistral | TASK [Check if deployed file exists for ComputeHostsDeployment] **************** >2018-08-20 06:20:58,851 p=1013 u=mistral | Monday 20 August 2018 06:20:58 -0400 (0:00:00.532) 0:01:40.880 ********* >2018-08-20 06:20:59,036 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:20:59,058 p=1013 u=mistral | TASK [Check previous deployment rc for ComputeHostsDeployment] ***************** >2018-08-20 06:20:59,058 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.207) 0:01:41.088 ********* >2018-08-20 06:20:59,076 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:59,095 p=1013 u=mistral | TASK [Remove deployed file for ComputeHostsDeployment when previous deployment failed] *** >2018-08-20 06:20:59,095 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.037) 0:01:41.125 ********* >2018-08-20 06:20:59,112 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:59,131 p=1013 u=mistral | TASK [Force remove deployed file for ComputeHostsDeployment] ******************* >2018-08-20 06:20:59,131 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.036) 0:01:41.161 ********* >2018-08-20 06:20:59,148 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:20:59,166 p=1013 u=mistral | TASK [Run deployment ComputeHostsDeployment] 
*********************************** >2018-08-20 06:20:59,166 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.034) 0:01:41.196 ********* >2018-08-20 06:20:59,850 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.notify.json)", "delta": "0:00:00.461800", "end": "2018-08-20 06:20:59.801453", "rc": 0, "start": "2018-08-20 06:20:59.339653", "stderr": "[2018-08-20 06:20:59,367] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json\n[2018-08-20 06:20:59,420] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain 
compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:20:59,420] (heat-config) [DEBUG] [2018-08-20 06:20:59,390] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-08-20 06:20:59,390] (heat-config) [INFO] 
deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-ae5bivewx723-0-bbajdl5xoenw/8fcf1214-ab6f-49bb-b9c6-a275c4402e25\n[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:20:59,390] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1\n[2018-08-20 06:20:59,416] (heat-config) [INFO] \n[2018-08-20 06:20:59,416] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 
ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /compute-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-08-20 
06:20:59,416] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1\n\n[2018-08-20 06:20:59,420] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:20:59,421] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.notify.json\n[2018-08-20 06:20:59,795] (heat-config) [INFO] \n[2018-08-20 06:20:59,795] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:20:59,367] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json", "[2018-08-20 06:20:59,420] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 
compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain 
compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:20:59,420] (heat-config) [DEBUG] [2018-08-20 06:20:59,390] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-08-20 06:20:59,390] 
(heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-ae5bivewx723-0-bbajdl5xoenw/8fcf1214-ab6f-49bb-b9c6-a275c4402e25", "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:20:59,390] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1", "[2018-08-20 06:20:59,416] (heat-config) [INFO] ", "[2018-08-20 06:20:59,416] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain 
compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 
ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", 
"", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", 
"172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 
ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /compute-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-08-20 06:20:59,416] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1", "", "[2018-08-20 06:20:59,420] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:20:59,421] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.notify.json", "[2018-08-20 06:20:59,795] (heat-config) [INFO] ", "[2018-08-20 06:20:59,795] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:20:59,883 p=1013 u=mistral | TASK [Output for ComputeHostsDeployment] *************************************** >2018-08-20 06:20:59,883 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.716) 0:01:41.913 ********* >2018-08-20 06:20:59,961 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:20:59,367] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json", > "[2018-08-20 06:20:59,420] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.19 
overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /compute-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:20:59,420] (heat-config) [DEBUG] [2018-08-20 06:20:59,390] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeHostsDeployment-ae5bivewx723-0-bbajdl5xoenw/8fcf1214-ab6f-49bb-b9c6-a275c4402e25", > "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:20:59,390] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:20:59,390] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1", > "[2018-08-20 06:20:59,416] (heat-config) [INFO] ", > "[2018-08-20 06:20:59,416] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", 
> "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 
compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > 
"192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain 
compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 
compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 
compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > 
"", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > 
"172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", 
> "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /compute-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-08-20 06:20:59,416] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1", > "", > "[2018-08-20 06:20:59,420] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:20:59,421] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.json < /var/lib/heat-config/deployed/409819ca-eafe-4d5e-bc7a-98d1dbf08eb1.notify.json", > "[2018-08-20 06:20:59,795] (heat-config) [INFO] ", > "[2018-08-20 06:20:59,795] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:20:59,995 p=1013 u=mistral | TASK [Check-mode for Run deployment ComputeHostsDeployment] ******************** >2018-08-20 06:20:59,995 p=1013 u=mistral | Monday 20 August 2018 06:20:59 -0400 (0:00:00.112) 0:01:42.025 ********* >2018-08-20 06:21:00,012 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:00,029 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:00,030 p=1013 u=mistral | Monday 20 August 2018 06:21:00 -0400 (0:00:00.034) 0:01:42.059 ********* >2018-08-20 06:21:00,171 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "5f95f5a3-4413-4bd7-9e03-2e90fa1e9860"}, "changed": false} >2018-08-20 06:21:00,191 p=1013 u=mistral | TASK [Render deployment file for ComputeAllNodesDeployment] ******************** >2018-08-20 06:21:00,192 p=1013 u=mistral | Monday 20 August 2018 06:21:00 
-0400 (0:00:00.162) 0:01:42.222 ********* >2018-08-20 06:21:00,839 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "4ca7e47a3682c78be2a8a7c6a6a3baea023a1ad4", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesDeployment-5f95f5a3-4413-4bd7-9e03-2e90fa1e9860", "gid": 0, "group": "root", "md5sum": "9dcfa0655df4890b15e1ffe8401ce6a6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19157, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760460.34-41104388771790/source", "state": "file", "uid": 0} >2018-08-20 06:21:00,875 p=1013 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesDeployment] ************* >2018-08-20 06:21:00,875 p=1013 u=mistral | Monday 20 August 2018 06:21:00 -0400 (0:00:00.683) 0:01:42.905 ********* >2018-08-20 06:21:01,070 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:01,090 p=1013 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesDeployment] ************** >2018-08-20 06:21:01,090 p=1013 u=mistral | Monday 20 August 2018 06:21:01 -0400 (0:00:00.214) 0:01:43.120 ********* >2018-08-20 06:21:01,106 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:01,125 p=1013 u=mistral | TASK [Remove deployed file for ComputeAllNodesDeployment when previous deployment failed] *** >2018-08-20 06:21:01,126 p=1013 u=mistral | Monday 20 August 2018 06:21:01 -0400 (0:00:00.035) 0:01:43.155 ********* >2018-08-20 06:21:01,142 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:01,162 p=1013 u=mistral | TASK [Force remove deployed file for ComputeAllNodesDeployment] **************** >2018-08-20 06:21:01,162 p=1013 u=mistral | Monday 20 August 2018 06:21:01 -0400 (0:00:00.036) 0:01:43.192 ********* >2018-08-20 06:21:01,179 p=1013 u=mistral | 
skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:01,198 p=1013 u=mistral | TASK [Run deployment ComputeAllNodesDeployment] ******************************** >2018-08-20 06:21:01,198 p=1013 u=mistral | Monday 20 August 2018 06:21:01 -0400 (0:00:00.035) 0:01:43.228 ********* >2018-08-20 06:21:01,923 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.notify.json)", "delta": "0:00:00.540986", "end": "2018-08-20 06:21:01.903018", "rc": 0, "start": "2018-08-20 06:21:01.362032", "stderr": "[2018-08-20 06:21:01,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json\n[2018-08-20 06:21:01,509] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:01,509] (heat-config) [DEBUG] \n[2018-08-20 06:21:01,509] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:21:01,509] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.notify.json\n[2018-08-20 06:21:01,896] (heat-config) [INFO] \n[2018-08-20 06:21:01,896] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:01,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json", "[2018-08-20 06:21:01,509] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:01,509] (heat-config) [DEBUG] ", "[2018-08-20 06:21:01,509] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-08-20 06:21:01,509] (heat-config) 
[DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.notify.json", "[2018-08-20 06:21:01,896] (heat-config) [INFO] ", "[2018-08-20 06:21:01,896] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:01,942 p=1013 u=mistral | TASK [Output for ComputeAllNodesDeployment] ************************************ >2018-08-20 06:21:01,943 p=1013 u=mistral | Monday 20 August 2018 06:21:01 -0400 (0:00:00.744) 0:01:43.973 ********* >2018-08-20 06:21:01,990 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:01,389] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json", > "[2018-08-20 06:21:01,509] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:01,509] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:01,509] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:21:01,509] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.json < /var/lib/heat-config/deployed/5f95f5a3-4413-4bd7-9e03-2e90fa1e9860.notify.json", > "[2018-08-20 06:21:01,896] (heat-config) [INFO] ", > "[2018-08-20 06:21:01,896] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:02,010 p=1013 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesDeployment] ***************** >2018-08-20 06:21:02,010 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.067) 0:01:44.040 ********* >2018-08-20 06:21:02,025 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:02,042 p=1013 u=mistral | TASK [Lookup deployment UUID] 
************************************************** >2018-08-20 06:21:02,043 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.032) 0:01:44.073 ********* >2018-08-20 06:21:02,098 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c94a2cd5-1626-4b20-9e31-3e584b0c8539"}, "changed": false} >2018-08-20 06:21:02,117 p=1013 u=mistral | TASK [Render deployment file for ComputeAllNodesValidationDeployment] ********** >2018-08-20 06:21:02,117 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.074) 0:01:44.147 ********* >2018-08-20 06:21:02,663 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "b5577558a61c8ae7ae1c6ad1d0d7d6c0467c0e14", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeAllNodesValidationDeployment-c94a2cd5-1626-4b20-9e31-3e584b0c8539", "gid": 0, "group": "root", "md5sum": "7d19b19a53ee62f8f72e70999868c605", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4935, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760462.18-172922193376868/source", "state": "file", "uid": 0} >2018-08-20 06:21:02,686 p=1013 u=mistral | TASK [Check if deployed file exists for ComputeAllNodesValidationDeployment] *** >2018-08-20 06:21:02,687 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.569) 0:01:44.717 ********* >2018-08-20 06:21:02,876 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:02,897 p=1013 u=mistral | TASK [Check previous deployment rc for ComputeAllNodesValidationDeployment] **** >2018-08-20 06:21:02,898 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.211) 0:01:44.928 ********* >2018-08-20 06:21:02,916 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:02,935 p=1013 u=mistral | TASK [Remove deployed file for ComputeAllNodesValidationDeployment when previous deployment failed] 
*** >2018-08-20 06:21:02,935 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.037) 0:01:44.965 ********* >2018-08-20 06:21:02,952 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:02,970 p=1013 u=mistral | TASK [Force remove deployed file for ComputeAllNodesValidationDeployment] ****** >2018-08-20 06:21:02,970 p=1013 u=mistral | Monday 20 August 2018 06:21:02 -0400 (0:00:00.035) 0:01:45.000 ********* >2018-08-20 06:21:02,990 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:03,013 p=1013 u=mistral | TASK [Run deployment ComputeAllNodesValidationDeployment] ********************** >2018-08-20 06:21:03,013 p=1013 u=mistral | Monday 20 August 2018 06:21:03 -0400 (0:00:00.042) 0:01:45.043 ********* >2018-08-20 06:21:04,182 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.notify.json)", "delta": "0:00:00.985085", "end": "2018-08-20 06:21:04.160544", "rc": 0, "start": "2018-08-20 06:21:03.175459", "stderr": "[2018-08-20 06:21:03,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json\n[2018-08-20 06:21:03,767] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 
succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:03,768] (heat-config) [DEBUG] [2018-08-20 06:21:03,220] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12\n[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_fqdn=False\n[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_ntp=True\n[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-74f72ojrw5rn-0-irrtebjcn5iy/e702928b-1e4e-41b3-ad54-5c1b60dcac62\n[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:03,220] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539\n[2018-08-20 06:21:03,763] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\nPing to 172.17.1.16 succeeded.\nSUCCESS\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\nPing to 172.17.2.26 succeeded.\nSUCCESS\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\nPing to 172.17.3.14 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nSUCCESS\n\n[2018-08-20 06:21:03,763] (heat-config) [DEBUG] \n[2018-08-20 06:21:03,763] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539\n\n[2018-08-20 06:21:03,768] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:03,768] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.notify.json\n[2018-08-20 06:21:04,154] (heat-config) [INFO] \n[2018-08-20 06:21:04,154] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:03,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json", "[2018-08-20 06:21:03,767] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:03,768] (heat-config) [DEBUG] [2018-08-20 06:21:03,220] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", "[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_fqdn=False", "[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_ntp=True", "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-74f72ojrw5rn-0-irrtebjcn5iy/e702928b-1e4e-41b3-ad54-5c1b60dcac62", "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:03,220] (heat-config) [DEBUG] Running 
/var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539", "[2018-08-20 06:21:03,763] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", "Ping to 172.17.1.16 succeeded.", "SUCCESS", "Trying to ping 172.17.2.26 for local network 172.17.2.0/24.", "Ping to 172.17.2.26 succeeded.", "SUCCESS", "Trying to ping 172.17.3.14 for local network 172.17.3.0/24.", "Ping to 172.17.3.14 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "SUCCESS", "", "[2018-08-20 06:21:03,763] (heat-config) [DEBUG] ", "[2018-08-20 06:21:03,763] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539", "", "[2018-08-20 06:21:03,768] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:03,768] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.notify.json", "[2018-08-20 06:21:04,154] (heat-config) [INFO] ", "[2018-08-20 06:21:04,154] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:04,207 p=1013 u=mistral | TASK [Output for ComputeAllNodesValidationDeployment] ************************** >2018-08-20 06:21:04,208 p=1013 u=mistral | Monday 20 August 2018 06:21:04 -0400 (0:00:01.194) 0:01:46.238 ********* >2018-08-20 06:21:04,265 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:03,199] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json", > "[2018-08-20 06:21:03,767] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 172.17.1.16 for local network 172.17.1.0/24.\\nPing to 172.17.1.16 
succeeded.\\nSUCCESS\\nTrying to ping 172.17.2.26 for local network 172.17.2.0/24.\\nPing to 172.17.2.26 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:03,768] (heat-config) [DEBUG] [2018-08-20 06:21:03,220] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_fqdn=False", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] validate_ntp=True", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_stack_id=overcloud-ComputeAllNodesValidationDeployment-74f72ojrw5rn-0-irrtebjcn5iy/e702928b-1e4e-41b3-ad54-5c1b60dcac62", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:03,220] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:03,220] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539", > "[2018-08-20 06:21:03,763] (heat-config) [INFO] Trying to ping 172.17.1.16 for local network 172.17.1.0/24.", > "Ping to 172.17.1.16 succeeded.", > "SUCCESS", > "Trying to ping 172.17.2.26 for local network 172.17.2.0/24.", > "Ping to 172.17.2.26 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.14 for local network 172.17.3.0/24.", > "Ping to 172.17.3.14 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > 
"SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "SUCCESS", > "", > "[2018-08-20 06:21:03,763] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:03,763] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c94a2cd5-1626-4b20-9e31-3e584b0c8539", > "", > "[2018-08-20 06:21:03,768] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:03,768] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.json < /var/lib/heat-config/deployed/c94a2cd5-1626-4b20-9e31-3e584b0c8539.notify.json", > "[2018-08-20 06:21:04,154] (heat-config) [INFO] ", > "[2018-08-20 06:21:04,154] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:04,286 p=1013 u=mistral | TASK [Check-mode for Run deployment ComputeAllNodesValidationDeployment] ******* >2018-08-20 06:21:04,286 p=1013 u=mistral | Monday 20 August 2018 06:21:04 -0400 (0:00:00.078) 0:01:46.316 ********* >2018-08-20 06:21:04,305 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:04,325 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:04,325 p=1013 u=mistral | Monday 20 August 2018 06:21:04 -0400 (0:00:00.038) 0:01:46.355 ********* >2018-08-20 06:21:04,385 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "c4edcc17-3c48-4521-8f2b-89b6c09a8aae"}, "changed": false} >2018-08-20 06:21:04,404 p=1013 u=mistral | TASK [Render deployment file for ComputeArtifactsDeploy] *********************** >2018-08-20 06:21:04,404 p=1013 u=mistral | Monday 20 August 2018 06:21:04 -0400 (0:00:00.078) 0:01:46.434 ********* >2018-08-20 06:21:04,912 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "0850a7664b9ad4a5831502d8c3c1261a72282ef9", "dest": 
"/var/lib/heat-config/tripleo-config-download/ComputeArtifactsDeploy-c4edcc17-3c48-4521-8f2b-89b6c09a8aae", "gid": 0, "group": "root", "md5sum": "3acc9521d8ef1e5001a1be997d268045", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2015, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760464.46-113403807454798/source", "state": "file", "uid": 0} >2018-08-20 06:21:04,932 p=1013 u=mistral | TASK [Check if deployed file exists for ComputeArtifactsDeploy] **************** >2018-08-20 06:21:04,932 p=1013 u=mistral | Monday 20 August 2018 06:21:04 -0400 (0:00:00.528) 0:01:46.962 ********* >2018-08-20 06:21:05,117 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:05,150 p=1013 u=mistral | TASK [Check previous deployment rc for ComputeArtifactsDeploy] ***************** >2018-08-20 06:21:05,150 p=1013 u=mistral | Monday 20 August 2018 06:21:05 -0400 (0:00:00.217) 0:01:47.180 ********* >2018-08-20 06:21:05,170 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:05,190 p=1013 u=mistral | TASK [Remove deployed file for ComputeArtifactsDeploy when previous deployment failed] *** >2018-08-20 06:21:05,190 p=1013 u=mistral | Monday 20 August 2018 06:21:05 -0400 (0:00:00.040) 0:01:47.220 ********* >2018-08-20 06:21:05,208 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:05,227 p=1013 u=mistral | TASK [Force remove deployed file for ComputeArtifactsDeploy] ******************* >2018-08-20 06:21:05,228 p=1013 u=mistral | Monday 20 August 2018 06:21:05 -0400 (0:00:00.037) 0:01:47.258 ********* >2018-08-20 06:21:05,247 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:05,268 p=1013 u=mistral | TASK [Run deployment ComputeArtifactsDeploy] 
*********************************** >2018-08-20 06:21:05,268 p=1013 u=mistral | Monday 20 August 2018 06:21:05 -0400 (0:00:00.040) 0:01:47.298 ********* >2018-08-20 06:21:05,917 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.notify.json)", "delta": "0:00:00.426310", "end": "2018-08-20 06:21:05.893428", "rc": 0, "start": "2018-08-20 06:21:05.467118", "stderr": "[2018-08-20 06:21:05,492] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json\n[2018-08-20 06:21:05,524] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:05,524] (heat-config) [DEBUG] [2018-08-20 06:21:05,515] (heat-config) [INFO] artifact_urls=\n[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3\n[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ComputeArtifactsDeploy-s6tgtv6qchg4-0-yheckpyxcqw3/cc235c5b-e771-4fc2-b2ba-28ef979f2793\n[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:05,515] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae\n[2018-08-20 06:21:05,521] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-08-20 06:21:05,521] (heat-config) [DEBUG] \n[2018-08-20 06:21:05,521] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae\n\n[2018-08-20 06:21:05,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:05,525] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.notify.json\n[2018-08-20 06:21:05,886] (heat-config) [INFO] \n[2018-08-20 06:21:05,886] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:05,492] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json", "[2018-08-20 06:21:05,524] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:05,524] (heat-config) [DEBUG] [2018-08-20 06:21:05,515] (heat-config) [INFO] artifact_urls=", "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ComputeArtifactsDeploy-s6tgtv6qchg4-0-yheckpyxcqw3/cc235c5b-e771-4fc2-b2ba-28ef979f2793", "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:05,515] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae", "[2018-08-20 06:21:05,521] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-08-20 06:21:05,521] (heat-config) [DEBUG] ", "[2018-08-20 06:21:05,521] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae", "", "[2018-08-20 06:21:05,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:05,525] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.notify.json", "[2018-08-20 06:21:05,886] (heat-config) [INFO] ", "[2018-08-20 06:21:05,886] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:05,940 p=1013 u=mistral | TASK [Output for ComputeArtifactsDeploy] *************************************** >2018-08-20 06:21:05,940 p=1013 u=mistral | Monday 20 August 2018 06:21:05 -0400 (0:00:00.671) 0:01:47.970 ********* >2018-08-20 06:21:06,067 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:05,492] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json", > "[2018-08-20 06:21:05,524] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:05,524] (heat-config) [DEBUG] [2018-08-20 06:21:05,515] (heat-config) [INFO] artifact_urls=", > "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_server_id=4072d5ff-8bed-44d6-95f3-487e46ddd7d3", > "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-ComputeArtifactsDeploy-s6tgtv6qchg4-0-yheckpyxcqw3/cc235c5b-e771-4fc2-b2ba-28ef979f2793", > "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:05,515] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:05,515] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae", > "[2018-08-20 06:21:05,521] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-08-20 06:21:05,521] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:05,521] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c4edcc17-3c48-4521-8f2b-89b6c09a8aae", > "", > "[2018-08-20 06:21:05,524] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:05,525] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.json < /var/lib/heat-config/deployed/c4edcc17-3c48-4521-8f2b-89b6c09a8aae.notify.json", > "[2018-08-20 06:21:05,886] (heat-config) [INFO] ", > "[2018-08-20 06:21:05,886] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:06,090 p=1013 u=mistral | TASK [Check-mode for Run deployment ComputeArtifactsDeploy] ******************** >2018-08-20 06:21:06,090 p=1013 u=mistral | Monday 20 August 2018 06:21:06 -0400 (0:00:00.150) 0:01:48.120 ********* >2018-08-20 06:21:06,105 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:21:06,123 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:06,123 p=1013 u=mistral | Monday 20 August 2018 06:21:06 -0400 (0:00:00.033) 0:01:48.153 ********* >2018-08-20 06:21:06,255 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"deployment_uuid": "957f219a-9d62-48a1-80d0-d5e45c1aa590"}, "changed": false} >2018-08-20 06:21:06,277 p=1013 u=mistral | TASK [Render deployment file for ComputeHostPrepDeployment] ******************** >2018-08-20 06:21:06,278 p=1013 u=mistral | Monday 20 August 2018 06:21:06 -0400 (0:00:00.154) 0:01:48.307 ********* >2018-08-20 06:21:06,831 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "94f1100dbecfdcb4013b25eebbd254db95f5f04a", "dest": "/var/lib/heat-config/tripleo-config-download/ComputeHostPrepDeployment-957f219a-9d62-48a1-80d0-d5e45c1aa590", "gid": 0, "group": "root", "md5sum": "172c0f3953c7d43bcbc3bf48511dffdf", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20014, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760466.41-164923913274531/source", "state": "file", "uid": 0} >2018-08-20 06:21:06,852 p=1013 u=mistral | TASK [Check if deployed file exists for ComputeHostPrepDeployment] ************* >2018-08-20 06:21:06,852 p=1013 u=mistral | Monday 20 August 2018 06:21:06 -0400 (0:00:00.574) 0:01:48.882 ********* >2018-08-20 06:21:07,088 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:07,109 p=1013 u=mistral | TASK [Check previous deployment rc for ComputeHostPrepDeployment] ************** >2018-08-20 06:21:07,109 p=1013 u=mistral | Monday 20 August 2018 06:21:07 -0400 (0:00:00.256) 0:01:49.139 ********* >2018-08-20 06:21:07,130 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:07,196 p=1013 
u=mistral | TASK [Remove deployed file for ComputeHostPrepDeployment when previous deployment failed] *** >2018-08-20 06:21:07,196 p=1013 u=mistral | Monday 20 August 2018 06:21:07 -0400 (0:00:00.086) 0:01:49.226 ********* >2018-08-20 06:21:07,216 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:07,238 p=1013 u=mistral | TASK [Force remove deployed file for ComputeHostPrepDeployment] **************** >2018-08-20 06:21:07,239 p=1013 u=mistral | Monday 20 August 2018 06:21:07 -0400 (0:00:00.042) 0:01:49.269 ********* >2018-08-20 06:21:07,259 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:07,280 p=1013 u=mistral | TASK [Run deployment ComputeHostPrepDeployment] ******************************** >2018-08-20 06:21:07,280 p=1013 u=mistral | Monday 20 August 2018 06:21:07 -0400 (0:00:00.041) 0:01:49.310 ********* >2018-08-20 06:21:13,500 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.notify.json)", "delta": "0:00:06.027246", "end": "2018-08-20 06:21:13.477684", "rc": 0, "start": "2018-08-20 06:21:07.450438", "stderr": "[2018-08-20 06:21:07,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json\n[2018-08-20 06:21:13,113] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: 
[localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:13,113] (heat-config) [DEBUG] [2018-08-20 06:21:07,495] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_variables.json\n[2018-08-20 06:21:13,109] (heat-config) [INFO] Return code 0\n[2018-08-20 06:21:13,109] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-08-20 06:21:13,109] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml\n\n[2018-08-20 06:21:13,113] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-08-20 06:21:13,114] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.notify.json\n[2018-08-20 06:21:13,471] (heat-config) [INFO] \n[2018-08-20 06:21:13,472] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:07,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json", "[2018-08-20 06:21:13,113] 
(heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:13,113] (heat-config) [DEBUG] [2018-08-20 06:21:07,495] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_variables.json", "[2018-08-20 06:21:13,109] (heat-config) [INFO] Return code 0", "[2018-08-20 06:21:13,109] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-08-20 06:21:13,109] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml", "", "[2018-08-20 06:21:13,113] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-08-20 06:21:13,114] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.notify.json", "[2018-08-20 06:21:13,471] (heat-config) [INFO] ", "[2018-08-20 06:21:13,472] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:13,521 p=1013 u=mistral | TASK [Output for ComputeHostPrepDeployment] ************************************ >2018-08-20 06:21:13,521 p=1013 u=mistral | Monday 20 August 2018 06:21:13 -0400 (0:00:06.241) 0:01:55.551 ********* >2018-08-20 06:21:13,573 p=1013 u=mistral | ok: [compute-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:07,474] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json", > "[2018-08-20 06:21:13,113] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:13,113] (heat-config) [DEBUG] [2018-08-20 06:21:07,495] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_variables.json", > "[2018-08-20 06:21:13,109] (heat-config) [INFO] Return code 0", > "[2018-08-20 06:21:13,109] (heat-config) [INFO] ", > "PLAY [localhost] 
***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP *********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-08-20 06:21:13,109] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/957f219a-9d62-48a1-80d0-d5e45c1aa590_playbook.yaml", > "", > "[2018-08-20 06:21:13,113] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-08-20 06:21:13,114] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.json < /var/lib/heat-config/deployed/957f219a-9d62-48a1-80d0-d5e45c1aa590.notify.json", > "[2018-08-20 06:21:13,471] (heat-config) [INFO] ", > "[2018-08-20 06:21:13,472] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:13,592 p=1013 u=mistral | TASK [Check-mode for Run deployment ComputeHostPrepDeployment] ***************** >2018-08-20 06:21:13,592 p=1013 u=mistral | Monday 20 August 2018 06:21:13 -0400 (0:00:00.070) 0:01:55.622 ********* >2018-08-20 06:21:13,610 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:13,634 p=1013 u=mistral | TASK [include_tasks] *********************************************************** >2018-08-20 06:21:13,634 p=1013 u=mistral | Monday 20 August 2018 06:21:13 -0400 (0:00:00.042) 0:01:55.664 ********* >2018-08-20 06:21:13,726 p=1013 u=mistral | TASK [include_tasks] *********************************************************** >2018-08-20 06:21:13,727 p=1013 
u=mistral | Monday 20 August 2018 06:21:13 -0400 (0:00:00.092) 0:01:55.757 ********* >2018-08-20 06:21:13,809 p=1013 u=mistral | TASK [include_tasks] *********************************************************** >2018-08-20 06:21:13,809 p=1013 u=mistral | Monday 20 August 2018 06:21:13 -0400 (0:00:00.082) 0:01:55.839 ********* >2018-08-20 06:21:14,071 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,079 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,087 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,097 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,104 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,112 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,120 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,128 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,136 p=1013 u=mistral | included: /var/lib/mistral/overcloud/CephStorage/deployments.yaml for ceph-0 >2018-08-20 06:21:14,211 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:14,211 p=1013 u=mistral | Monday 20 August 2018 06:21:14 -0400 (0:00:00.402) 0:01:56.241 ********* >2018-08-20 06:21:14,276 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "cdf871d6-d7eb-4494-991f-4ce4e8da547b"}, "changed": false} >2018-08-20 06:21:14,296 p=1013 u=mistral | TASK [Render deployment file for NetworkDeployment] **************************** >2018-08-20 06:21:14,296 p=1013 u=mistral | Monday 20 
August 2018 06:21:14 -0400 (0:00:00.084) 0:01:56.326 ********* >2018-08-20 06:21:14,809 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "3749ca72e7d67756da47ded9d43ecc6a42313ee4", "dest": "/var/lib/heat-config/tripleo-config-download/NetworkDeployment-cdf871d6-d7eb-4494-991f-4ce4e8da547b", "gid": 0, "group": "root", "md5sum": "797c7125c818b4f994a83307aec3321f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 8777, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760474.35-28749129440960/source", "state": "file", "uid": 0} >2018-08-20 06:21:14,830 p=1013 u=mistral | TASK [Check if deployed file exists for NetworkDeployment] ********************* >2018-08-20 06:21:14,830 p=1013 u=mistral | Monday 20 August 2018 06:21:14 -0400 (0:00:00.533) 0:01:56.860 ********* >2018-08-20 06:21:15,018 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:15,042 p=1013 u=mistral | TASK [Check previous deployment rc for NetworkDeployment] ********************** >2018-08-20 06:21:15,042 p=1013 u=mistral | Monday 20 August 2018 06:21:15 -0400 (0:00:00.212) 0:01:57.072 ********* >2018-08-20 06:21:15,062 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:15,083 p=1013 u=mistral | TASK [Remove deployed file for NetworkDeployment when previous deployment failed] *** >2018-08-20 06:21:15,083 p=1013 u=mistral | Monday 20 August 2018 06:21:15 -0400 (0:00:00.040) 0:01:57.113 ********* >2018-08-20 06:21:15,104 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:15,123 p=1013 u=mistral | TASK [Force remove deployed file for NetworkDeployment] ************************ >2018-08-20 06:21:15,123 p=1013 u=mistral | Monday 20 August 2018 06:21:15 -0400 (0:00:00.040) 0:01:57.153 ********* >2018-08-20 06:21:15,142 p=1013 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:15,161 p=1013 u=mistral | TASK [Run deployment NetworkDeployment] **************************************** >2018-08-20 06:21:15,161 p=1013 u=mistral | Monday 20 August 2018 06:21:15 -0400 (0:00:00.037) 0:01:57.191 ********* >2018-08-20 06:21:30,395 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.notify.json)", "delta": "0:00:15.047214", "end": "2018-08-20 06:21:30.366801", "rc": 0, "start": "2018-08-20 06:21:15.319587", "stderr": "[2018-08-20 06:21:15,343] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json\n[2018-08-20 06:21:29,963] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p 
/etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:21:16 AM] [INFO] Finding active nics\\n[2018/08/20 06:21:16 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:21:16 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: 
eth1\\n[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] applying network configs...\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 
's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:29,963] (heat-config) [DEBUG] [2018-08-20 06:21:15,364] (heat-config) [INFO] interface_name=nic1\n[2018-08-20 06:21:15,364] (heat-config) [INFO] bridge_name=br-ex\n[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NetworkDeployment-g74aasaa2l3o-TripleOSoftwareDeployment-va7j67yeb2ud/8c538290-dc41-492d-924c-d1bc50eed838\n[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:15,365] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b\n[2018-08-20 06:21:29,959] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS\n\n[2018-08-20 06:21:29,959] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": 
\"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\n[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.\n[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.\n[2018/08/20 06:21:16 AM] [INFO] Finding active nics\n[2018/08/20 06:21:16 
AM] [INFO] eth2 is an embedded active nic\n[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic\n[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic\n[2018/08/20 06:21:16 AM] [INFO] lo is not an active nic\n[2018/08/20 06:21:16 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\n[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2\n[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: eth1\n[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0\n[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0\n[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40\n[2018/08/20 06:21:16 AM] [INFO] applying network configs...\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth1\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: br-isolated\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\n[2018/08/20 06:21:16 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/route-eth1\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\n[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0\n[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default 
'' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.2\n++ '[' -n 192.168.24.2 ']'\n++ break\n++ echo 192.168.24.2\n+ local METADATA_IP=192.168.24.2\n+ '[' -n 192.168.24.2 ']'\n+ is_local_ip 192.168.24.2\n+ local IP_TO_CHECK=192.168.24.2\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.2/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\n+ _ping=ping\n+ [[ 192.168.24.2 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.2\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n\n[2018-08-20 06:21:29,959] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b\n\n[2018-08-20 06:21:29,963] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:29,964] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.notify.json\n[2018-08-20 06:21:30,360] (heat-config) [INFO] \n[2018-08-20 06:21:30,360] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:15,343] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json", "[2018-08-20 06:21:29,963] (heat-config) [INFO] {\"deploy_stdout\": 
\"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function 
']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:21:16 AM] [INFO] Finding active nics\\n[2018/08/20 06:21:16 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:21:16 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] applying network configs...\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: 
br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ 
for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:29,963] (heat-config) [DEBUG] [2018-08-20 06:21:15,364] (heat-config) [INFO] interface_name=nic1", "[2018-08-20 06:21:15,364] (heat-config) [INFO] bridge_name=br-ex", "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NetworkDeployment-g74aasaa2l3o-TripleOSoftwareDeployment-va7j67yeb2ud/8c538290-dc41-492d-924c-d1bc50eed838", "[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:15,365] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b", "[2018-08-20 06:21:29,959] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", "", "[2018-08-20 06:21:29,959] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, 
{\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", "+ '[' -z '' ']'", "+ trap configure_safe_defaults EXIT", "+ mkdir -p /etc/os-net-config", "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", "++ type -t network_config_hook", "+ '[' '' = function ']'", "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", "+ set +e", "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", "[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json", "[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.", "[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.", "[2018/08/20 06:21:16 AM] [INFO] Finding active nics", "[2018/08/20 06:21:16 AM] [INFO] eth2 is an embedded active nic", "[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic", "[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic", "[2018/08/20 06:21:16 AM] [INFO] lo is not an active nic", "[2018/08/20 06:21:16 AM] 
[INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", "[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", "[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2", "[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: eth1", "[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0", "[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0", "[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0", "[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated", "[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1", "[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30", "[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40", "[2018/08/20 06:21:16 AM] [INFO] applying network configs...", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth1", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40", "[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: br-isolated", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated", "[2018/08/20 06:21:16 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-vlan30", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", "[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated", "[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1", "[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0", "[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40", "[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30", "[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40", "+ RETVAL=2", "+ set -e", "+ [[ 2 == 2 ]]", "+ ping_metadata_ip", "++ get_metadata_ip", "++ local METADATA_IP", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=", "++ '[' -n '' ']'", "++ for URL in 
os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", "++ METADATA_IP=192.168.24.2", "++ '[' -n 192.168.24.2 ']'", "++ break", "++ echo 192.168.24.2", "+ local METADATA_IP=192.168.24.2", "+ '[' -n 192.168.24.2 ']'", "+ is_local_ip 192.168.24.2", "+ local IP_TO_CHECK=192.168.24.2", "+ ip -o a", "+ grep 'inet6\\? 192.168.24.2/'", "+ return 1", "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", "+ _ping=ping", "+ [[ 192.168.24.2 =~ : ]]", "+ local COUNT=0", "+ ping -c 1 192.168.24.2", "+ echo SUCCESS", "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", "+ configure_safe_defaults", "+ [[ 0 == 0 ]]", "+ return 0", "", "[2018-08-20 06:21:29,959] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b", "", "[2018-08-20 06:21:29,963] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:29,964] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.notify.json", "[2018-08-20 06:21:30,360] (heat-config) [INFO] ", "[2018-08-20 06:21:30,360] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:30,417 p=1013 u=mistral | TASK [Output for NetworkDeployment] ******************************************** >2018-08-20 06:21:30,417 p=1013 u=mistral | Monday 20 August 2018 06:21:30 -0400 (0:00:15.256) 0:02:12.447 ********* >2018-08-20 06:21:30,475 p=1013 
u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:15,343] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json", > "[2018-08-20 06:21:29,963] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping metadata IP 192.168.24.2...SUCCESS\\n\", \"deploy_stderr\": \"+ '[' -n '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}' ']'\\n+ '[' -z '' ']'\\n+ trap configure_safe_defaults EXIT\\n+ mkdir -p /etc/os-net-config\\n+ echo '{\\\"network_config\\\": [{\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"192.168.24.16/24\\\"}], \\\"dns_servers\\\": [\\\"10.0.0.1\\\"], \\\"name\\\": \\\"nic1\\\", \\\"routes\\\": [{\\\"default\\\": true, \\\"ip_netmask\\\": \\\"0.0.0.0/0\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}, {\\\"ip_netmask\\\": \\\"169.254.169.254/32\\\", \\\"next_hop\\\": \\\"192.168.24.1\\\"}], \\\"type\\\": \\\"interface\\\", \\\"use_dhcp\\\": false}, {\\\"members\\\": [{\\\"name\\\": \\\"nic2\\\", \\\"primary\\\": true, \\\"type\\\": \\\"interface\\\"}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.3.10/24\\\"}], \\\"type\\\": 
\\\"vlan\\\", \\\"vlan_id\\\": 30}, {\\\"addresses\\\": [{\\\"ip_netmask\\\": \\\"172.17.4.10/24\\\"}], \\\"type\\\": \\\"vlan\\\", \\\"vlan_id\\\": 40}], \\\"name\\\": \\\"br-isolated\\\", \\\"type\\\": \\\"ovs_bridge\\\", \\\"use_dhcp\\\": false}]}'\\n++ type -t network_config_hook\\n+ '[' '' = function ']'\\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\\n+ set +e\\n+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes\\n[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json\\n[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.\\n[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.\\n[2018/08/20 06:21:16 AM] [INFO] Finding active nics\\n[2018/08/20 06:21:16 AM] [INFO] eth2 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic\\n[2018/08/20 06:21:16 AM] [INFO] lo is not an active nic\\n[2018/08/20 06:21:16 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\\n[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']\\n[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2\\n[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: eth1\\n[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] applying network configs...\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] 
running ifdown on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40\\n[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0\\n[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1\\n[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0\\n[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: 
vlan30\\n[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30\\n[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40\\n+ RETVAL=2\\n+ set -e\\n+ [[ 2 == 2 ]]\\n+ ping_metadata_ip\\n++ get_metadata_ip\\n++ local METADATA_IP\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=\\n++ '[' -n '' ']'\\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\\n+++ sed -e 's|http.*://\\\\[\\\\?\\\\([^]]*\\\\)]\\\\?:.*|\\\\1|'\\n++ METADATA_IP=192.168.24.2\\n++ '[' -n 192.168.24.2 ']'\\n++ break\\n++ echo 192.168.24.2\\n+ local METADATA_IP=192.168.24.2\\n+ '[' -n 192.168.24.2 ']'\\n+ is_local_ip 192.168.24.2\\n+ local IP_TO_CHECK=192.168.24.2\\n+ ip -o a\\n+ grep 'inet6\\\\? 
192.168.24.2/'\\n+ return 1\\n+ echo -n 'Trying to ping metadata IP 192.168.24.2...'\\n+ _ping=ping\\n+ [[ 192.168.24.2 =~ : ]]\\n+ local COUNT=0\\n+ ping -c 1 192.168.24.2\\n+ echo SUCCESS\\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\\n+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\\n+ configure_safe_defaults\\n+ [[ 0 == 0 ]]\\n+ return 0\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:29,963] (heat-config) [DEBUG] [2018-08-20 06:21:15,364] (heat-config) [INFO] interface_name=nic1", > "[2018-08-20 06:21:15,364] (heat-config) [INFO] bridge_name=br-ex", > "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:15,364] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NetworkDeployment-g74aasaa2l3o-TripleOSoftwareDeployment-va7j67yeb2ud/8c538290-dc41-492d-924c-d1bc50eed838", > "[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:15,365] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:15,365] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b", > "[2018-08-20 06:21:29,959] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.2...SUCCESS", > "", > "[2018-08-20 06:21:29,959] (heat-config) [DEBUG] + '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", 
\"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'", > "+ '[' -z '' ']'", > "+ trap configure_safe_defaults EXIT", > "+ mkdir -p /etc/os-net-config", > "+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.16/24\"}], \"dns_servers\": [\"10.0.0.1\"], \"name\": \"nic1\", \"routes\": [{\"default\": true, \"ip_netmask\": \"0.0.0.0/0\", \"next_hop\": \"192.168.24.1\"}, {\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"members\": [{\"name\": \"nic2\", \"primary\": true, \"type\": \"interface\"}, {\"addresses\": [{\"ip_netmask\": \"172.17.3.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 30}, {\"addresses\": [{\"ip_netmask\": \"172.17.4.10/24\"}], \"type\": \"vlan\", \"vlan_id\": 40}], \"name\": \"br-isolated\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'", > "++ type -t network_config_hook", > "+ '[' '' = function ']'", > "+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json", > "+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json", > "+ set +e", > "+ os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes", > "[2018/08/20 06:21:15 AM] [INFO] Using config file at: /etc/os-net-config/config.json", > "[2018/08/20 06:21:15 AM] [INFO] Ifcfg net config provider created.", > "[2018/08/20 06:21:15 AM] [INFO] Not using any mapping file.", > "[2018/08/20 06:21:16 AM] [INFO] Finding active nics", > "[2018/08/20 06:21:16 AM] [INFO] eth2 is an embedded active nic", > "[2018/08/20 06:21:16 AM] [INFO] eth0 is an embedded active nic", > "[2018/08/20 06:21:16 AM] [INFO] eth1 is an embedded active nic", > "[2018/08/20 06:21:16 AM] [INFO] 
lo is not an active nic", > "[2018/08/20 06:21:16 AM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)", > "[2018/08/20 06:21:16 AM] [INFO] Active nics are ['eth0', 'eth1', 'eth2']", > "[2018/08/20 06:21:16 AM] [INFO] nic3 mapped to: eth2", > "[2018/08/20 06:21:16 AM] [INFO] nic2 mapped to: eth1", > "[2018/08/20 06:21:16 AM] [INFO] nic1 mapped to: eth0", > "[2018/08/20 06:21:16 AM] [INFO] adding interface: eth0", > "[2018/08/20 06:21:16 AM] [INFO] adding custom route for interface: eth0", > "[2018/08/20 06:21:16 AM] [INFO] adding bridge: br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] adding interface: eth1", > "[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan30", > "[2018/08/20 06:21:16 AM] [INFO] adding vlan: vlan40", > "[2018/08/20 06:21:16 AM] [INFO] applying network configs...", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth1", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: eth0", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan30", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on interface: vlan40", > "[2018/08/20 06:21:16 AM] [INFO] running ifdown on bridge: br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan40", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan30", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth0", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-eth1", > "[2018/08/20 06:21:16 AM] [INFO] Writing config 
/etc/sysconfig/network-scripts/ifcfg-br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-vlan30", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth1", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-eth0", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route6-vlan40", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan40", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/route-vlan30", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth0", > "[2018/08/20 06:21:16 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eth1", > "[2018/08/20 06:21:16 AM] [INFO] running ifup on bridge: br-isolated", > "[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth1", > "[2018/08/20 06:21:16 AM] [INFO] running ifup on interface: eth0", > "[2018/08/20 06:21:20 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:21:25 AM] [INFO] running ifup on interface: vlan40", > "[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan30", > "[2018/08/20 06:21:29 AM] [INFO] running ifup on interface: vlan40", > "+ RETVAL=2", > "+ set -e", > "+ [[ 2 == 2 ]]", > "+ ping_metadata_ip", > "++ get_metadata_ip", > "++ local METADATA_IP", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key 
os-collect-config.heat.auth_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=", > "++ '[' -n '' ']'", > "++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url", > "+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw", > "+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'", > "++ METADATA_IP=192.168.24.2", > "++ '[' -n 192.168.24.2 ']'", > "++ break", > "++ echo 192.168.24.2", > "+ local METADATA_IP=192.168.24.2", > "+ '[' -n 192.168.24.2 ']'", > "+ is_local_ip 192.168.24.2", > "+ local IP_TO_CHECK=192.168.24.2", > "+ ip -o a", > "+ grep 'inet6\\? 192.168.24.2/'", > "+ return 1", > "+ echo -n 'Trying to ping metadata IP 192.168.24.2...'", > "+ _ping=ping", > "+ [[ 192.168.24.2 =~ : ]]", > "+ local COUNT=0", > "+ ping -c 1 192.168.24.2", > "+ echo SUCCESS", > "+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'", > "+ rm /etc/udev/rules.d/99-dhcp-all-interfaces.rules", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'", > "+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'", > "+ configure_safe_defaults", > "+ [[ 0 == 0 ]]", > "+ return 0", > "", > "[2018-08-20 06:21:29,959] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/cdf871d6-d7eb-4494-991f-4ce4e8da547b", > "", > "[2018-08-20 06:21:29,963] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:29,964] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.json < /var/lib/heat-config/deployed/cdf871d6-d7eb-4494-991f-4ce4e8da547b.notify.json", > "[2018-08-20 06:21:30,360] (heat-config) [INFO] ", > "[2018-08-20 06:21:30,360] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 
06:21:30,499 p=1013 u=mistral | TASK [Check-mode for Run deployment NetworkDeployment] ************************* >2018-08-20 06:21:30,499 p=1013 u=mistral | Monday 20 August 2018 06:21:30 -0400 (0:00:00.081) 0:02:12.529 ********* >2018-08-20 06:21:30,528 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:30,547 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:30,547 p=1013 u=mistral | Monday 20 August 2018 06:21:30 -0400 (0:00:00.047) 0:02:12.577 ********* >2018-08-20 06:21:30,643 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "d1ff13ef-0b7d-4945-bdc7-a6824927bab8"}, "changed": false} >2018-08-20 06:21:30,663 p=1013 u=mistral | TASK [Render deployment file for CephStorageUpgradeInitDeployment] ************* >2018-08-20 06:21:30,663 p=1013 u=mistral | Monday 20 August 2018 06:21:30 -0400 (0:00:00.116) 0:02:12.693 ********* >2018-08-20 06:21:31,163 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "7e8337a6b159dffebd6956cc7c1ba496a6eeb486", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageUpgradeInitDeployment-d1ff13ef-0b7d-4945-bdc7-a6824927bab8", "gid": 0, "group": "root", "md5sum": "1b04de633c081bb2ac01bf9fad678f74", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1186, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760490.72-255156877573403/source", "state": "file", "uid": 0} >2018-08-20 06:21:31,186 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageUpgradeInitDeployment] ****** >2018-08-20 06:21:31,186 p=1013 u=mistral | Monday 20 August 2018 06:21:31 -0400 (0:00:00.522) 0:02:13.216 ********* >2018-08-20 06:21:31,364 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:31,385 p=1013 u=mistral | TASK [Check previous deployment rc for 
CephStorageUpgradeInitDeployment] ******* >2018-08-20 06:21:31,385 p=1013 u=mistral | Monday 20 August 2018 06:21:31 -0400 (0:00:00.198) 0:02:13.415 ********* >2018-08-20 06:21:31,406 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:31,429 p=1013 u=mistral | TASK [Remove deployed file for CephStorageUpgradeInitDeployment when previous deployment failed] *** >2018-08-20 06:21:31,429 p=1013 u=mistral | Monday 20 August 2018 06:21:31 -0400 (0:00:00.044) 0:02:13.459 ********* >2018-08-20 06:21:31,447 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:31,467 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageUpgradeInitDeployment] ********* >2018-08-20 06:21:31,468 p=1013 u=mistral | Monday 20 August 2018 06:21:31 -0400 (0:00:00.038) 0:02:13.497 ********* >2018-08-20 06:21:31,485 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:31,505 p=1013 u=mistral | TASK [Run deployment CephStorageUpgradeInitDeployment] ************************* >2018-08-20 06:21:31,505 p=1013 u=mistral | Monday 20 August 2018 06:21:31 -0400 (0:00:00.037) 0:02:13.535 ********* >2018-08-20 06:21:32,066 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.notify.json)", "delta": "0:00:00.385312", "end": "2018-08-20 06:21:32.045312", "rc": 0, "start": "2018-08-20 06:21:31.660000", "stderr": "[2018-08-20 06:21:31,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json\n[2018-08-20 06:21:31,710] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 
06:21:31,710] (heat-config) [DEBUG] [2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-CephStorageUpgradeInitDeployment-dfzehmktlbs6/2c729576-0ae7-4c74-bbfc-870ff7737c80\n[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:31,705] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8\n[2018-08-20 06:21:31,707] (heat-config) [INFO] \n[2018-08-20 06:21:31,707] (heat-config) [DEBUG] \n[2018-08-20 06:21:31,708] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8\n\n[2018-08-20 06:21:31,710] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:31,711] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.notify.json\n[2018-08-20 06:21:32,039] (heat-config) [INFO] \n[2018-08-20 06:21:32,039] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:31,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json", "[2018-08-20 06:21:31,710] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:31,710] (heat-config) [DEBUG] [2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:31,704] (heat-config) [INFO] 
deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-CephStorageUpgradeInitDeployment-dfzehmktlbs6/2c729576-0ae7-4c74-bbfc-870ff7737c80", "[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:31,705] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8", "[2018-08-20 06:21:31,707] (heat-config) [INFO] ", "[2018-08-20 06:21:31,707] (heat-config) [DEBUG] ", "[2018-08-20 06:21:31,708] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8", "", "[2018-08-20 06:21:31,710] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:31,711] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.notify.json", "[2018-08-20 06:21:32,039] (heat-config) [INFO] ", "[2018-08-20 06:21:32,039] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:32,087 p=1013 u=mistral | TASK [Output for CephStorageUpgradeInitDeployment] ***************************** >2018-08-20 06:21:32,088 p=1013 u=mistral | Monday 20 August 2018 06:21:32 -0400 (0:00:00.582) 0:02:14.118 ********* >2018-08-20 06:21:32,146 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:31,685] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json", > "[2018-08-20 06:21:31,710] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:31,710] (heat-config) [DEBUG] [2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:31,704] 
(heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:31,704] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-CephStorageUpgradeInitDeployment-dfzehmktlbs6/2c729576-0ae7-4c74-bbfc-870ff7737c80", > "[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:31,705] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:31,705] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8", > "[2018-08-20 06:21:31,707] (heat-config) [INFO] ", > "[2018-08-20 06:21:31,707] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:31,708] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/d1ff13ef-0b7d-4945-bdc7-a6824927bab8", > "", > "[2018-08-20 06:21:31,710] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:31,711] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.json < /var/lib/heat-config/deployed/d1ff13ef-0b7d-4945-bdc7-a6824927bab8.notify.json", > "[2018-08-20 06:21:32,039] (heat-config) [INFO] ", > "[2018-08-20 06:21:32,039] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:32,168 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageUpgradeInitDeployment] ********** >2018-08-20 06:21:32,168 p=1013 u=mistral | Monday 20 August 2018 06:21:32 -0400 (0:00:00.080) 0:02:14.198 ********* >2018-08-20 06:21:32,182 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:32,201 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:32,201 p=1013 u=mistral | Monday 20 August 2018 06:21:32 -0400 (0:00:00.032) 0:02:14.231 ********* >2018-08-20 06:21:32,261 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": 
{"deployment_uuid": "15db9ad7-ff6b-4988-a079-2fe6e0416b51"}, "changed": false} >2018-08-20 06:21:32,282 p=1013 u=mistral | TASK [Render deployment file for CADeployment] ********************************* >2018-08-20 06:21:32,282 p=1013 u=mistral | Monday 20 August 2018 06:21:32 -0400 (0:00:00.081) 0:02:14.312 ********* >2018-08-20 06:21:32,779 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "209c9b5b49fc32eabf1ed2e4077242761d02f7cf", "dest": "/var/lib/heat-config/tripleo-config-download/CADeployment-15db9ad7-ff6b-4988-a079-2fe6e0416b51", "gid": 0, "group": "root", "md5sum": "fbdd25bbafc846131c2d4910c6a2c30e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 3000, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760492.34-36272913950038/source", "state": "file", "uid": 0} >2018-08-20 06:21:32,802 p=1013 u=mistral | TASK [Check if deployed file exists for CADeployment] ************************** >2018-08-20 06:21:32,802 p=1013 u=mistral | Monday 20 August 2018 06:21:32 -0400 (0:00:00.520) 0:02:14.832 ********* >2018-08-20 06:21:32,993 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:33,014 p=1013 u=mistral | TASK [Check previous deployment rc for CADeployment] *************************** >2018-08-20 06:21:33,014 p=1013 u=mistral | Monday 20 August 2018 06:21:33 -0400 (0:00:00.211) 0:02:15.044 ********* >2018-08-20 06:21:33,032 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:33,052 p=1013 u=mistral | TASK [Remove deployed file for CADeployment when previous deployment failed] *** >2018-08-20 06:21:33,053 p=1013 u=mistral | Monday 20 August 2018 06:21:33 -0400 (0:00:00.038) 0:02:15.082 ********* >2018-08-20 06:21:33,076 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:33,100 p=1013 u=mistral | TASK 
[Force remove deployed file for CADeployment] ***************************** >2018-08-20 06:21:33,100 p=1013 u=mistral | Monday 20 August 2018 06:21:33 -0400 (0:00:00.047) 0:02:15.130 ********* >2018-08-20 06:21:33,121 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:33,142 p=1013 u=mistral | TASK [Run deployment CADeployment] ********************************************* >2018-08-20 06:21:33,142 p=1013 u=mistral | Monday 20 August 2018 06:21:33 -0400 (0:00:00.042) 0:02:15.172 ********* >2018-08-20 06:21:34,251 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.notify.json)", "delta": "0:00:00.930344", "end": "2018-08-20 06:21:34.231754", "rc": 0, "start": "2018-08-20 06:21:33.301410", "stderr": "[2018-08-20 06:21:33,321] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json\n[2018-08-20 06:21:33,902] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}\n[2018-08-20 06:21:33,902] (heat-config) [DEBUG] [2018-08-20 06:21:33,339] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem\n[2018-08-20 06:21:33,340] (heat-config) [INFO] cacert_content=-----BEGIN 
CERTIFICATE-----\nMIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH\nUmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x\nODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD\nVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG\nA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK\nwJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4\nqiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy\ndP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg\nzsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB\nNfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O\nBBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/\nEHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0\nVC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H\nvCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh\nxxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ\nls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP\njeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl\nNj+B6Q2NoXe/7ws=\n-----END CERTIFICATE-----\n[2018-08-20 06:21:33,340] (heat-config) [INFO] update_anchor_command=update-ca-trust extract\n[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NodeTLSCAData-qbcaysphthes-CADeployment-2rij4c7mhst3/bddffe07-2645-4a48-ab93-2f3d4cb7887b\n[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:33,340] 
(heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51\n[2018-08-20 06:21:33,899] (heat-config) [INFO] \n[2018-08-20 06:21:33,899] (heat-config) [DEBUG] \n[2018-08-20 06:21:33,899] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51\n\n[2018-08-20 06:21:33,902] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:33,903] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.notify.json\n[2018-08-20 06:21:34,226] (heat-config) [INFO] \n[2018-08-20 06:21:34,226] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:33,321] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json", "[2018-08-20 06:21:33,902] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", "[2018-08-20 06:21:33,902] (heat-config) [DEBUG] [2018-08-20 06:21:33,339] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", "[2018-08-20 06:21:33,340] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", 
"qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", "dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", "Nj+B6Q2NoXe/7ws=", "-----END CERTIFICATE-----", "[2018-08-20 06:21:33,340] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NodeTLSCAData-qbcaysphthes-CADeployment-2rij4c7mhst3/bddffe07-2645-4a48-ab93-2f3d4cb7887b", "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:33,340] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51", "[2018-08-20 06:21:33,899] (heat-config) [INFO] ", "[2018-08-20 06:21:33,899] (heat-config) [DEBUG] ", "[2018-08-20 06:21:33,899] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51", "", "[2018-08-20 06:21:33,902] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:33,903] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.notify.json", "[2018-08-20 06:21:34,226] (heat-config) [INFO] ", "[2018-08-20 06:21:34,226] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:34,274 p=1013 u=mistral | TASK [Output for CADeployment] ************************************************* >2018-08-20 06:21:34,275 p=1013 u=mistral | Monday 20 August 2018 06:21:34 -0400 (0:00:01.132) 0:02:16.305 ********* >2018-08-20 06:21:34,325 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:33,321] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json", > "[2018-08-20 06:21:33,902] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"root_cert_md5sum\": \"2584ba658ccddd60c9694324a8547fbd /etc/pki/ca-trust/source/anchors/ca.crt.pem\\n\", \"deploy_status_code\": 0, \"deploy_stderr\": \"\"}", > "[2018-08-20 06:21:33,902] (heat-config) [DEBUG] [2018-08-20 06:21:33,339] (heat-config) [INFO] cacert_path=/etc/pki/ca-trust/source/anchors/ca.crt.pem", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] cacert_content=-----BEGIN CERTIFICATE-----", > "MIIDlzCCAn+gAwIBAgIJAKeXPqIlS80rMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV", > "BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwH", > "UmVkIEhBdDELMAkGA1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjAeFw0x", > "ODA4MjAwOTEyMjZaFw0xOTA4MjAwOTEyMjZaMGIxCzAJBgNVBAYTAlVTMQswCQYD", > "VQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECgwHUmVkIEhBdDELMAkG", > "A1UECwwCUUUxFTATBgNVBAMMDDE5Mi4xNjguMjQuMjCCASIwDQYJKoZIhvcNAQEB", > "BQADggEPADCCAQoCggEBAOHsMZOBfdYsz5QF5FJB9EEJUBx5O+mX/iq6tWmkU/uK", > "wJo7/7YK+QHvZyTLjGOuhLDH3gkfQ/aaDHlSG5EhLpHTkIGc8c0ABCEfmTlntjq4", > "qiz+rpUUelvbM+EW8gZeIecXyf1p0Kwh8mE5jfyB4Gbf/+oeJmwaqmoWJzh2jmNy", > 
"dP7fYpSmu3ZxbTwKT2NaIO+NLWrdRMrtMxlOHKwRZ06FgZ+mlT1RTYh3ebd+MbQg", > "zsdYMQ2DTrS8panpYi2Z3Sysb+TanpRTsmRwRXncwdvufjvk5DJP+8Gzq2UP/VQB", > "NfHQwIdmrcxI+d4fc3yELvypO7Qui6HWltItoeRfNX8CAwEAAaNQME4wHQYDVR0O", > "BBYEFPjqPbuloOP/sUg/EHuGKkE6NgKQMB8GA1UdIwQYMBaAFPjqPbuloOP/sUg/", > "EHuGKkE6NgKQMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAKJh1k/0", > "VC0HgmXjiiFF0HAZ5GYXA2QmD8HM8GOOBxVU26BL7a7TiY57l4MMVSx5ToIvEt0H", > "vCkhdZIlv5EdlRfaAzTJ/TnrEq8DDslUPi4oskrHBb5pG2VEtFrXICMPEdHx9fxh", > "xxYwkEMeIwoKqvFbDHy/xUQlJ8683HINYEqtLFEWTAvCICEi3vla4NXx08Qw5pTQ", > "ls8Tv/heAbREztkAcLClwV0qDpSpJDZGF5P6NoKz1+0cdOdZFykO2ncDjqi1S7HP", > "jeIi6AGdsRZW+Vm+p5WnRjTk/0glo2WDhxSLjbhI2Yr3EqB6Lyct3ZTMJIVZrIGl", > "Nj+B6Q2NoXe/7ws=", > "-----END CERTIFICATE-----", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] update_anchor_command=update-ca-trust extract", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorage-jxwk3dmbyvt2-0-m5vsdnxq3szm-NodeTLSCAData-qbcaysphthes-CADeployment-2rij4c7mhst3/bddffe07-2645-4a48-ab93-2f3d4cb7887b", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:33,340] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:33,340] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51", > "[2018-08-20 06:21:33,899] (heat-config) [INFO] ", > "[2018-08-20 06:21:33,899] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:33,899] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/15db9ad7-ff6b-4988-a079-2fe6e0416b51", > "", > "[2018-08-20 06:21:33,902] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:33,903] (heat-config) [DEBUG] Running heat-config-notify 
/var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.json < /var/lib/heat-config/deployed/15db9ad7-ff6b-4988-a079-2fe6e0416b51.notify.json", > "[2018-08-20 06:21:34,226] (heat-config) [INFO] ", > "[2018-08-20 06:21:34,226] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:34,349 p=1013 u=mistral | TASK [Check-mode for Run deployment CADeployment] ****************************** >2018-08-20 06:21:34,349 p=1013 u=mistral | Monday 20 August 2018 06:21:34 -0400 (0:00:00.074) 0:02:16.379 ********* >2018-08-20 06:21:34,364 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:34,385 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:34,385 p=1013 u=mistral | Monday 20 August 2018 06:21:34 -0400 (0:00:00.036) 0:02:16.415 ********* >2018-08-20 06:21:34,487 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "16290d21-e646-412a-b708-446b434b1ab8"}, "changed": false} >2018-08-20 06:21:34,509 p=1013 u=mistral | TASK [Render deployment file for CephStorageDeployment] ************************ >2018-08-20 06:21:34,509 p=1013 u=mistral | Monday 20 August 2018 06:21:34 -0400 (0:00:00.123) 0:02:16.539 ********* >2018-08-20 06:21:35,009 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "98acbfe029ab7b931a8f7fe47095310ef0c2ae70", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageDeployment-16290d21-e646-412a-b708-446b434b1ab8", "gid": 0, "group": "root", "md5sum": "b2b73a3aba251b6fe49b63b129d81b37", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 9096, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760494.62-255767260025252/source", "state": "file", "uid": 0} >2018-08-20 06:21:35,031 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageDeployment] ***************** >2018-08-20 
06:21:35,032 p=1013 u=mistral | Monday 20 August 2018 06:21:35 -0400 (0:00:00.522) 0:02:17.062 ********* >2018-08-20 06:21:35,203 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:35,229 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageDeployment] ****************** >2018-08-20 06:21:35,229 p=1013 u=mistral | Monday 20 August 2018 06:21:35 -0400 (0:00:00.197) 0:02:17.259 ********* >2018-08-20 06:21:35,249 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:35,273 p=1013 u=mistral | TASK [Remove deployed file for CephStorageDeployment when previous deployment failed] *** >2018-08-20 06:21:35,274 p=1013 u=mistral | Monday 20 August 2018 06:21:35 -0400 (0:00:00.044) 0:02:17.304 ********* >2018-08-20 06:21:35,293 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:35,318 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageDeployment] ******************** >2018-08-20 06:21:35,318 p=1013 u=mistral | Monday 20 August 2018 06:21:35 -0400 (0:00:00.044) 0:02:17.348 ********* >2018-08-20 06:21:35,337 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:35,358 p=1013 u=mistral | TASK [Run deployment CephStorageDeployment] ************************************ >2018-08-20 06:21:35,358 p=1013 u=mistral | Monday 20 August 2018 06:21:35 -0400 (0:00:00.040) 0:02:17.388 ********* >2018-08-20 06:21:36,016 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.notify.json)", "delta": "0:00:00.478408", "end": "2018-08-20 06:21:35.997380", "rc": 0, "start": "2018-08-20 06:21:35.518972", "stderr": "[2018-08-20 06:21:35,541] 
(heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json\n[2018-08-20 06:21:35,648] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:35,648] (heat-config) [DEBUG] \n[2018-08-20 06:21:35,648] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:21:35,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.notify.json\n[2018-08-20 06:21:35,991] (heat-config) [INFO] \n[2018-08-20 06:21:35,991] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:35,541] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json", "[2018-08-20 06:21:35,648] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:35,648] (heat-config) [DEBUG] ", "[2018-08-20 06:21:35,648] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-08-20 06:21:35,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.notify.json", "[2018-08-20 06:21:35,991] (heat-config) [INFO] ", "[2018-08-20 06:21:35,991] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:36,037 p=1013 u=mistral | TASK [Output for CephStorageDeployment] **************************************** >2018-08-20 06:21:36,038 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.679) 0:02:18.068 ********* >2018-08-20 06:21:36,095 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:35,541] (heat-config) [DEBUG] Running 
/usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json", > "[2018-08-20 06:21:35,648] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:35,648] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:35,648] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:21:35,648] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.json < /var/lib/heat-config/deployed/16290d21-e646-412a-b708-446b434b1ab8.notify.json", > "[2018-08-20 06:21:35,991] (heat-config) [INFO] ", > "[2018-08-20 06:21:35,991] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:36,117 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageDeployment] ********************* >2018-08-20 06:21:36,117 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.079) 0:02:18.147 ********* >2018-08-20 06:21:36,134 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:36,154 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:36,154 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.036) 0:02:18.184 ********* >2018-08-20 06:21:36,210 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "13055796-5cf9-42fd-933e-edcaf826b985"}, "changed": false} >2018-08-20 06:21:36,232 p=1013 u=mistral | TASK [Render deployment file for CephStorageHostsDeployment] ******************* >2018-08-20 06:21:36,232 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.078) 0:02:18.262 ********* >2018-08-20 06:21:36,713 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "78e99ef44d2c3d9ad99cbd7a30dc042f748a57b5", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageHostsDeployment-13055796-5cf9-42fd-933e-edcaf826b985", "gid": 0, "group": "root", "md5sum": "c27fe76e1eec166c4e3ddf1c060cc094", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4437, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760496.29-249364593862079/source", "state": "file", "uid": 0} >2018-08-20 06:21:36,732 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageHostsDeployment] ************ >2018-08-20 06:21:36,732 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.500) 0:02:18.762 ********* >2018-08-20 06:21:36,907 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:36,930 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageHostsDeployment] ************* >2018-08-20 06:21:36,930 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.197) 0:02:18.960 ********* >2018-08-20 06:21:36,952 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:36,974 p=1013 u=mistral | TASK [Remove deployed file for CephStorageHostsDeployment when previous deployment failed] *** >2018-08-20 06:21:36,975 p=1013 u=mistral | Monday 20 August 2018 06:21:36 -0400 (0:00:00.044) 0:02:19.005 ********* >2018-08-20 06:21:36,996 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:37,017 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageHostsDeployment] *************** >2018-08-20 06:21:37,017 p=1013 u=mistral | Monday 20 August 2018 06:21:37 -0400 (0:00:00.042) 0:02:19.047 ********* >2018-08-20 06:21:37,036 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:37,058 p=1013 u=mistral | TASK [Run deployment CephStorageHostsDeployment] 
******************************* >2018-08-20 06:21:37,058 p=1013 u=mistral | Monday 20 August 2018 06:21:37 -0400 (0:00:00.040) 0:02:19.088 ********* >2018-08-20 06:21:37,665 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.notify.json)", "delta": "0:00:00.404670", "end": "2018-08-20 06:21:37.621229", "rc": 0, "start": "2018-08-20 06:21:37.216559", "stderr": "[2018-08-20 06:21:37,238] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json\n[2018-08-20 06:21:37,281] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain 
compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 
compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:37,281] (heat-config) [DEBUG] [2018-08-20 06:21:37,256] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane\n[2018-08-20 06:21:37,257] (heat-config) [INFO] 
deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-exbsto2bleo7-0-ce627kkwvi3f/d4097dd4-42d2-421c-850a-1824f2428be0\n[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:37,257] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985\n[2018-08-20 06:21:37,278] (heat-config) [INFO] \n[2018-08-20 06:21:37,278] (heat-config) [DEBUG] + set -o pipefail\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 
ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\n+ local 
'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\n++ hostname -s\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ local file=/etc/hosts\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 
overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ '[' '!' 
-f /etc/hosts ']'\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\n++ hostname -s\n+ sed -i /ceph-0/d /etc/hosts\n+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\n172.17.3.21 overcloud.storage.localdomain\n172.17.4.15 overcloud.storagemgmt.localdomain\n172.17.1.24 overcloud.internalapi.localdomain\n10.0.0.112 overcloud.localdomain\n172.17.1.16 controller-0.localdomain controller-0\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\n10.0.0.105 controller-0.external.localdomain controller-0.external\n192.168.24.12 controller-0.management.localdomain controller-0.management\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\n\n172.17.1.25 compute-0.localdomain compute-0\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\n192.168.24.13 compute-0.external.localdomain compute-0.external\n192.168.24.13 compute-0.management.localdomain compute-0.management\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\n\n\n\n172.17.3.10 ceph-0.localdomain ceph-0\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\n+ echo -ne '# HEAT_HOSTS_END\\n\\n'\n\n[2018-08-20 
06:21:37,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985\n\n[2018-08-20 06:21:37,282] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:37,282] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.notify.json\n[2018-08-20 06:21:37,615] (heat-config) [INFO] \n[2018-08-20 06:21:37,615] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:37,238] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json", "[2018-08-20 06:21:37,281] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 
compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain 
compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain 
ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 
ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain 
ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain 
ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local 
file=/etc/hosts\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:37,281] (heat-config) [DEBUG] [2018-08-20 06:21:37,256] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane", "[2018-08-20 06:21:37,257] 
(heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-exbsto2bleo7-0-ce627kkwvi3f/d4097dd4-42d2-421c-850a-1824f2428be0", "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:37,257] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985", "[2018-08-20 06:21:37,278] (heat-config) [INFO] ", "[2018-08-20 06:21:37,278] (heat-config) [DEBUG] + set -o pipefail", "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain 
compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.debian.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain 
ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain 
compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 
ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 
ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", 
"172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", 
"172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", 
"192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.redhat.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 
ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/cloud/templates/hosts.suse.tmpl", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain 
ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", "++ hostname -s", "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", 
"172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", 
"192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ local file=/etc/hosts", "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain ceph-0.external", "192.168.24.16 
ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ '[' '!' -f /etc/hosts ']'", "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", "++ hostname -s", "+ sed -i /ceph-0/d /etc/hosts", "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", "172.17.3.21 overcloud.storage.localdomain", "172.17.4.15 overcloud.storagemgmt.localdomain", "172.17.1.24 overcloud.internalapi.localdomain", "10.0.0.112 overcloud.localdomain", "172.17.1.16 controller-0.localdomain controller-0", "172.17.3.14 controller-0.storage.localdomain controller-0.storage", "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", "10.0.0.105 controller-0.external.localdomain controller-0.external", "192.168.24.12 controller-0.management.localdomain controller-0.management", "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", "", "172.17.1.25 compute-0.localdomain compute-0", "172.17.3.28 compute-0.storage.localdomain compute-0.storage", "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", "192.168.24.13 compute-0.external.localdomain compute-0.external", "192.168.24.13 compute-0.management.localdomain compute-0.management", "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", "", "", "", "172.17.3.10 ceph-0.localdomain ceph-0", "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", "192.168.24.16 ceph-0.management.localdomain ceph-0.management", "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", "", "[2018-08-20 06:21:37,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985", "", "[2018-08-20 06:21:37,282] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:37,282] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.notify.json", "[2018-08-20 06:21:37,615] (heat-config) [INFO] ", "[2018-08-20 06:21:37,615] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:37,703 p=1013 u=mistral | TASK [Output for CephStorageHostsDeployment] *********************************** >2018-08-20 06:21:37,703 p=1013 u=mistral | Monday 20 August 2018 06:21:37 -0400 (0:00:00.644) 0:02:19.733 ********* >2018-08-20 06:21:37,790 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:37,238] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json", > "[2018-08-20 06:21:37,281] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"+ set -o pipefail\\n+ '[' '!' 
-z '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 
overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.debian.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain 
controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.debian.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.freebsd.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.freebsd.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain 
ceph-0.ctlplane'\\n+ local file=/etc/cloud/templates/hosts.redhat.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'\\n+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ 
local file=/etc/cloud/templates/hosts.suse.tmpl\\n+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/cloud/templates/hosts.suse.tmpl ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain 
ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# HEAT_HOSTS_END\\\\n\\\\n'\\n+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ local file=/etc/hosts\\n+ local 'entries=192.168.24.19 
overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ '[' '!' 
-f /etc/hosts ']'\\n+ grep -q '^# HEAT_HOSTS_START' /etc/hosts\\n++ hostname -s\\n+ sed -i /ceph-0/d /etc/hosts\\n+ echo -ne '\\\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\\\n'\\n+ echo '192.168.24.19 overcloud.ctlplane.localdomain\\n172.17.3.21 overcloud.storage.localdomain\\n172.17.4.15 overcloud.storagemgmt.localdomain\\n172.17.1.24 overcloud.internalapi.localdomain\\n10.0.0.112 overcloud.localdomain\\n172.17.1.16 controller-0.localdomain controller-0\\n172.17.3.14 controller-0.storage.localdomain controller-0.storage\\n172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt\\n172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi\\n172.17.2.26 controller-0.tenant.localdomain controller-0.tenant\\n10.0.0.105 controller-0.external.localdomain controller-0.external\\n192.168.24.12 controller-0.management.localdomain controller-0.management\\n192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane\\n\\n172.17.1.25 compute-0.localdomain compute-0\\n172.17.3.28 compute-0.storage.localdomain compute-0.storage\\n192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt\\n172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi\\n172.17.2.19 compute-0.tenant.localdomain compute-0.tenant\\n192.168.24.13 compute-0.external.localdomain compute-0.external\\n192.168.24.13 compute-0.management.localdomain compute-0.management\\n192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane\\n\\n\\n\\n172.17.3.10 ceph-0.localdomain ceph-0\\n172.17.3.10 ceph-0.storage.localdomain ceph-0.storage\\n172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt\\n192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi\\n192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant\\n192.168.24.16 ceph-0.external.localdomain ceph-0.external\\n192.168.24.16 ceph-0.management.localdomain ceph-0.management\\n192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'\\n+ echo -ne '# 
HEAT_HOSTS_END\\\\n\\\\n'\\n\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:37,281] (heat-config) [DEBUG] [2018-08-20 06:21:37,256] (heat-config) [INFO] hosts=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 
ceph-0.ctlplane.localdomain ceph-0.ctlplane", > "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageHostsDeployment-exbsto2bleo7-0-ce627kkwvi3f/d4097dd4-42d2-421c-850a-1824f2428be0", > "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:37,257] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:37,257] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985", > "[2018-08-20 06:21:37,278] (heat-config) [INFO] ", > "[2018-08-20 06:21:37,278] (heat-config) [DEBUG] + set -o pipefail", > "+ '[' '!' -z '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane' ']'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.debian.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain 
compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.debian.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 
compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.debian.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.debian.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.debian.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.freebsd.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain 
compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.freebsd.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 
compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.freebsd.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.freebsd.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.freebsd.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 
controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.redhat.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain 
controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.redhat.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 
controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' 
-f /etc/cloud/templates/hosts.redhat.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.redhat.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.redhat.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 
ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ for tmpl in '/etc/cloud/templates/hosts.*.tmpl'", > "+ write_entries /etc/cloud/templates/hosts.suse.tmpl '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", 
> "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/cloud/templates/hosts.suse.tmpl", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain 
ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/cloud/templates/hosts.suse.tmpl ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/cloud/templates/hosts.suse.tmpl", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/cloud/templates/hosts.suse.tmpl", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "+ write_entries /etc/hosts '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 
ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ local file=/etc/hosts", > "+ local 'entries=192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 
ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ '[' '!' -f /etc/hosts ']'", > "+ grep -q '^# HEAT_HOSTS_START' /etc/hosts", > "++ hostname -s", > "+ sed -i /ceph-0/d /etc/hosts", > "+ echo -ne '\\n# HEAT_HOSTS_START - Do not edit manually within this section!\\n'", > "+ echo '192.168.24.19 overcloud.ctlplane.localdomain", > "172.17.3.21 overcloud.storage.localdomain", > "172.17.4.15 overcloud.storagemgmt.localdomain", > "172.17.1.24 overcloud.internalapi.localdomain", > "10.0.0.112 overcloud.localdomain", > "172.17.1.16 controller-0.localdomain controller-0", > "172.17.3.14 controller-0.storage.localdomain controller-0.storage", > "172.17.4.12 controller-0.storagemgmt.localdomain controller-0.storagemgmt", > "172.17.1.16 controller-0.internalapi.localdomain controller-0.internalapi", > "172.17.2.26 controller-0.tenant.localdomain controller-0.tenant", > "10.0.0.105 controller-0.external.localdomain controller-0.external", > "192.168.24.12 controller-0.management.localdomain controller-0.management", > "192.168.24.12 controller-0.ctlplane.localdomain controller-0.ctlplane", > "", > "172.17.1.25 compute-0.localdomain compute-0", > "172.17.3.28 compute-0.storage.localdomain compute-0.storage", > "192.168.24.13 compute-0.storagemgmt.localdomain compute-0.storagemgmt", > "172.17.1.25 compute-0.internalapi.localdomain compute-0.internalapi", > "172.17.2.19 compute-0.tenant.localdomain compute-0.tenant", > "192.168.24.13 compute-0.external.localdomain compute-0.external", > "192.168.24.13 compute-0.management.localdomain compute-0.management", > "192.168.24.13 compute-0.ctlplane.localdomain compute-0.ctlplane", > "", > "", > "", > "172.17.3.10 ceph-0.localdomain ceph-0", > "172.17.3.10 ceph-0.storage.localdomain ceph-0.storage", > "172.17.4.10 ceph-0.storagemgmt.localdomain 
ceph-0.storagemgmt", > "192.168.24.16 ceph-0.internalapi.localdomain ceph-0.internalapi", > "192.168.24.16 ceph-0.tenant.localdomain ceph-0.tenant", > "192.168.24.16 ceph-0.external.localdomain ceph-0.external", > "192.168.24.16 ceph-0.management.localdomain ceph-0.management", > "192.168.24.16 ceph-0.ctlplane.localdomain ceph-0.ctlplane'", > "+ echo -ne '# HEAT_HOSTS_END\\n\\n'", > "", > "[2018-08-20 06:21:37,278] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/13055796-5cf9-42fd-933e-edcaf826b985", > "", > "[2018-08-20 06:21:37,282] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:37,282] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.json < /var/lib/heat-config/deployed/13055796-5cf9-42fd-933e-edcaf826b985.notify.json", > "[2018-08-20 06:21:37,615] (heat-config) [INFO] ", > "[2018-08-20 06:21:37,615] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:37,827 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageHostsDeployment] **************** >2018-08-20 06:21:37,827 p=1013 u=mistral | Monday 20 August 2018 06:21:37 -0400 (0:00:00.124) 0:02:19.857 ********* >2018-08-20 06:21:37,845 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:37,865 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:37,865 p=1013 u=mistral | Monday 20 August 2018 06:21:37 -0400 (0:00:00.038) 0:02:19.895 ********* >2018-08-20 06:21:38,085 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "5c5f9138-bd4d-4b76-8c62-dc15b893c155"}, "changed": false} >2018-08-20 06:21:38,108 p=1013 u=mistral | TASK [Render deployment file for CephStorageAllNodesDeployment] **************** >2018-08-20 06:21:38,109 p=1013 u=mistral | Monday 20 August 2018 06:21:38 -0400 
(0:00:00.243) 0:02:20.139 ********* >2018-08-20 06:21:38,742 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e42dac0f6226d6b38fd77ddae0b65900e3585864", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesDeployment-5c5f9138-bd4d-4b76-8c62-dc15b893c155", "gid": 0, "group": "root", "md5sum": "4f75b8561a59aa13b6daf9ad16e90dab", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 19159, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760498.32-76873144052953/source", "state": "file", "uid": 0} >2018-08-20 06:21:38,762 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesDeployment] ********* >2018-08-20 06:21:38,763 p=1013 u=mistral | Monday 20 August 2018 06:21:38 -0400 (0:00:00.653) 0:02:20.793 ********* >2018-08-20 06:21:39,023 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:39,047 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesDeployment] ********** >2018-08-20 06:21:39,047 p=1013 u=mistral | Monday 20 August 2018 06:21:39 -0400 (0:00:00.284) 0:02:21.077 ********* >2018-08-20 06:21:39,067 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:39,089 p=1013 u=mistral | TASK [Remove deployed file for CephStorageAllNodesDeployment when previous deployment failed] *** >2018-08-20 06:21:39,089 p=1013 u=mistral | Monday 20 August 2018 06:21:39 -0400 (0:00:00.042) 0:02:21.119 ********* >2018-08-20 06:21:39,109 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:39,129 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesDeployment] ************ >2018-08-20 06:21:39,129 p=1013 u=mistral | Monday 20 August 2018 06:21:39 -0400 (0:00:00.039) 0:02:21.159 ********* >2018-08-20 06:21:39,145 p=1013 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:39,167 p=1013 u=mistral | TASK [Run deployment CephStorageAllNodesDeployment] **************************** >2018-08-20 06:21:39,167 p=1013 u=mistral | Monday 20 August 2018 06:21:39 -0400 (0:00:00.038) 0:02:21.197 ********* >2018-08-20 06:21:39,930 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.notify.json)", "delta": "0:00:00.498738", "end": "2018-08-20 06:21:39.913389", "rc": 0, "start": "2018-08-20 06:21:39.414651", "stderr": "[2018-08-20 06:21:39,442] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json\n[2018-08-20 06:21:39,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:39,560] (heat-config) [DEBUG] \n[2018-08-20 06:21:39,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera\n[2018-08-20 06:21:39,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.notify.json\n[2018-08-20 06:21:39,907] (heat-config) [INFO] \n[2018-08-20 06:21:39,908] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:39,442] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json", "[2018-08-20 06:21:39,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:39,560] (heat-config) [DEBUG] ", "[2018-08-20 06:21:39,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", "[2018-08-20 06:21:39,560] (heat-config) [DEBUG] Running 
heat-config-notify /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.notify.json", "[2018-08-20 06:21:39,907] (heat-config) [INFO] ", "[2018-08-20 06:21:39,908] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:39,952 p=1013 u=mistral | TASK [Output for CephStorageAllNodesDeployment] ******************************** >2018-08-20 06:21:39,953 p=1013 u=mistral | Monday 20 August 2018 06:21:39 -0400 (0:00:00.785) 0:02:21.982 ********* >2018-08-20 06:21:40,059 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:39,442] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/hiera < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json", > "[2018-08-20 06:21:39,559] (heat-config) [INFO] {\"deploy_stdout\": \"\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:39,560] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:39,560] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/hiera", > "[2018-08-20 06:21:39,560] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.json < /var/lib/heat-config/deployed/5c5f9138-bd4d-4b76-8c62-dc15b893c155.notify.json", > "[2018-08-20 06:21:39,907] (heat-config) [INFO] ", > "[2018-08-20 06:21:39,908] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:40,119 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesDeployment] ************* >2018-08-20 06:21:40,119 p=1013 u=mistral | Monday 20 August 2018 06:21:40 -0400 (0:00:00.166) 0:02:22.149 ********* >2018-08-20 06:21:40,135 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:40,152 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 
06:21:40,152 p=1013 u=mistral | Monday 20 August 2018 06:21:40 -0400 (0:00:00.033) 0:02:22.182 ********* >2018-08-20 06:21:40,208 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "c445f4cc-e759-44f1-9837-068a81774ac9"}, "changed": false} >2018-08-20 06:21:40,228 p=1013 u=mistral | TASK [Render deployment file for CephStorageAllNodesValidationDeployment] ****** >2018-08-20 06:21:40,228 p=1013 u=mistral | Monday 20 August 2018 06:21:40 -0400 (0:00:00.076) 0:02:22.258 ********* >2018-08-20 06:21:40,679 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "8350ea04dc54925176e2f04ef067bc4f147ed459", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageAllNodesValidationDeployment-c445f4cc-e759-44f1-9837-068a81774ac9", "gid": 0, "group": "root", "md5sum": "a1022732cd1a9df1101b2408e8339be1", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 4943, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760500.29-278102107651434/source", "state": "file", "uid": 0} >2018-08-20 06:21:40,716 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageAllNodesValidationDeployment] *** >2018-08-20 06:21:40,717 p=1013 u=mistral | Monday 20 August 2018 06:21:40 -0400 (0:00:00.488) 0:02:22.747 ********* >2018-08-20 06:21:40,948 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:40,968 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageAllNodesValidationDeployment] *** >2018-08-20 06:21:40,968 p=1013 u=mistral | Monday 20 August 2018 06:21:40 -0400 (0:00:00.251) 0:02:22.998 ********* >2018-08-20 06:21:40,994 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:41,014 p=1013 u=mistral | TASK [Remove deployed file for CephStorageAllNodesValidationDeployment when previous deployment failed] *** >2018-08-20 06:21:41,015 p=1013 u=mistral | Monday 20 
August 2018 06:21:41 -0400 (0:00:00.046) 0:02:23.044 ********* >2018-08-20 06:21:41,043 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:41,071 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageAllNodesValidationDeployment] *** >2018-08-20 06:21:41,071 p=1013 u=mistral | Monday 20 August 2018 06:21:41 -0400 (0:00:00.056) 0:02:23.101 ********* >2018-08-20 06:21:41,098 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:41,124 p=1013 u=mistral | TASK [Run deployment CephStorageAllNodesValidationDeployment] ****************** >2018-08-20 06:21:41,127 p=1013 u=mistral | Monday 20 August 2018 06:21:41 -0400 (0:00:00.053) 0:02:23.155 ********* >2018-08-20 06:21:42,381 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.notify.json)", "delta": "0:00:00.973331", "end": "2018-08-20 06:21:42.357314", "rc": 0, "start": "2018-08-20 06:21:41.383983", "stderr": "[2018-08-20 06:21:41,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json\n[2018-08-20 06:21:41,980] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 
succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:41,980] (heat-config) [DEBUG] [2018-08-20 06:21:41,430] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12\n[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_fqdn=False\n[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_ntp=True\n[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-7hrwc7pp4375-0-2ci3oiap6yac/fe64ff5e-7c9a-435a-bb35-17c05b27d48e\n[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:41,431] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9\n[2018-08-20 06:21:41,976] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\nPing to 10.0.0.105 succeeded.\nSUCCESS\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\nPing to 172.17.3.14 succeeded.\nSUCCESS\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\nPing to 172.17.4.12 succeeded.\nSUCCESS\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\nPing to 192.168.24.12 succeeded.\nSUCCESS\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\nSUCCESS\n\n[2018-08-20 06:21:41,976] (heat-config) [DEBUG] \n[2018-08-20 06:21:41,976] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9\n\n[2018-08-20 06:21:41,980] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:41,981] (heat-config) 
[DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json < /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.notify.json\n[2018-08-20 06:21:42,351] (heat-config) [INFO] \n[2018-08-20 06:21:42,351] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:41,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json", "[2018-08-20 06:21:41,980] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:41,980] (heat-config) [DEBUG] [2018-08-20 06:21:41,430] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", "[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_fqdn=False", "[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_ntp=True", "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-7hrwc7pp4375-0-2ci3oiap6yac/fe64ff5e-7c9a-435a-bb35-17c05b27d48e", "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:41,431] 
(heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:41,431] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9", "[2018-08-20 06:21:41,976] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.", "Ping to 10.0.0.105 succeeded.", "SUCCESS", "Trying to ping 172.17.3.14 for local network 172.17.3.0/24.", "Ping to 172.17.3.14 succeeded.", "SUCCESS", "Trying to ping 172.17.4.12 for local network 172.17.4.0/24.", "Ping to 172.17.4.12 succeeded.", "SUCCESS", "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", "Ping to 192.168.24.12 succeeded.", "SUCCESS", "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", "SUCCESS", "", "[2018-08-20 06:21:41,976] (heat-config) [DEBUG] ", "[2018-08-20 06:21:41,976] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9", "", "[2018-08-20 06:21:41,980] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:41,981] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json < /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.notify.json", "[2018-08-20 06:21:42,351] (heat-config) [INFO] ", "[2018-08-20 06:21:42,351] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:42,402 p=1013 u=mistral | TASK [Output for CephStorageAllNodesValidationDeployment] ********************** >2018-08-20 06:21:42,402 p=1013 u=mistral | Monday 20 August 2018 06:21:42 -0400 (0:00:01.275) 0:02:24.432 ********* >2018-08-20 06:21:42,452 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:41,409] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < 
/var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json", > "[2018-08-20 06:21:41,980] (heat-config) [INFO] {\"deploy_stdout\": \"Trying to ping 10.0.0.105 for local network 10.0.0.0/24.\\nPing to 10.0.0.105 succeeded.\\nSUCCESS\\nTrying to ping 172.17.3.14 for local network 172.17.3.0/24.\\nPing to 172.17.3.14 succeeded.\\nSUCCESS\\nTrying to ping 172.17.4.12 for local network 172.17.4.0/24.\\nPing to 172.17.4.12 succeeded.\\nSUCCESS\\nTrying to ping 192.168.24.12 for local network 192.168.24.0/24.\\nPing to 192.168.24.12 succeeded.\\nSUCCESS\\nTrying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.\\nTrying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.\\nSUCCESS\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:41,980] (heat-config) [DEBUG] [2018-08-20 06:21:41,430] (heat-config) [INFO] ping_test_ips=172.17.3.14 172.17.4.12 172.17.1.16 172.17.2.26 10.0.0.105 192.168.24.12", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_fqdn=False", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] validate_ntp=True", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_stack_id=overcloud-CephStorageAllNodesValidationDeployment-7hrwc7pp4375-0-2ci3oiap6yac/fe64ff5e-7c9a-435a-bb35-17c05b27d48e", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:41,431] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:41,431] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9", > "[2018-08-20 06:21:41,976] (heat-config) [INFO] Trying to ping 10.0.0.105 for local network 10.0.0.0/24.", > "Ping to 10.0.0.105 succeeded.", > "SUCCESS", > "Trying to ping 172.17.3.14 for local 
network 172.17.3.0/24.", > "Ping to 172.17.3.14 succeeded.", > "SUCCESS", > "Trying to ping 172.17.4.12 for local network 172.17.4.0/24.", > "Ping to 172.17.4.12 succeeded.", > "SUCCESS", > "Trying to ping 192.168.24.12 for local network 192.168.24.0/24.", > "Ping to 192.168.24.12 succeeded.", > "SUCCESS", > "Trying to ping default gateway 192.168.24.1...Ping to 192.168.24.1 succeeded.", > "Trying to ping default gateway 10.0.0.1...Ping to 10.0.0.1 succeeded.", > "SUCCESS", > "", > "[2018-08-20 06:21:41,976] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:41,976] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/c445f4cc-e759-44f1-9837-068a81774ac9", > "", > "[2018-08-20 06:21:41,980] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:41,981] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.json < /var/lib/heat-config/deployed/c445f4cc-e759-44f1-9837-068a81774ac9.notify.json", > "[2018-08-20 06:21:42,351] (heat-config) [INFO] ", > "[2018-08-20 06:21:42,351] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:42,474 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageAllNodesValidationDeployment] *** >2018-08-20 06:21:42,474 p=1013 u=mistral | Monday 20 August 2018 06:21:42 -0400 (0:00:00.071) 0:02:24.504 ********* >2018-08-20 06:21:42,489 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:42,507 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:42,508 p=1013 u=mistral | Monday 20 August 2018 06:21:42 -0400 (0:00:00.033) 0:02:24.537 ********* >2018-08-20 06:21:42,578 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "1c8dc5cd-3f16-4984-875f-57f0a2e2205b"}, "changed": false} >2018-08-20 06:21:42,599 p=1013 u=mistral | TASK [Render 
deployment file for CephStorageHostPrepDeployment] **************** >2018-08-20 06:21:42,600 p=1013 u=mistral | Monday 20 August 2018 06:21:42 -0400 (0:00:00.092) 0:02:24.630 ********* >2018-08-20 06:21:43,075 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "28f3a5d985b96153534272450a76033cbe5b99f9", "dest": "/var/lib/heat-config/tripleo-config-download/CephStorageHostPrepDeployment-1c8dc5cd-3f16-4984-875f-57f0a2e2205b", "gid": 0, "group": "root", "md5sum": "ed846bb73e9b2fca370f2e3203858136", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 20022, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760502.67-76607448111251/source", "state": "file", "uid": 0} >2018-08-20 06:21:43,095 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageHostPrepDeployment] ********* >2018-08-20 06:21:43,095 p=1013 u=mistral | Monday 20 August 2018 06:21:43 -0400 (0:00:00.495) 0:02:25.125 ********* >2018-08-20 06:21:43,277 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:43,298 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageHostPrepDeployment] ********** >2018-08-20 06:21:43,298 p=1013 u=mistral | Monday 20 August 2018 06:21:43 -0400 (0:00:00.203) 0:02:25.328 ********* >2018-08-20 06:21:43,318 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:43,337 p=1013 u=mistral | TASK [Remove deployed file for CephStorageHostPrepDeployment when previous deployment failed] *** >2018-08-20 06:21:43,337 p=1013 u=mistral | Monday 20 August 2018 06:21:43 -0400 (0:00:00.038) 0:02:25.367 ********* >2018-08-20 06:21:43,355 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:43,374 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageHostPrepDeployment] ************ >2018-08-20 06:21:43,374 
p=1013 u=mistral | Monday 20 August 2018 06:21:43 -0400 (0:00:00.037) 0:02:25.404 ********* >2018-08-20 06:21:43,392 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:43,411 p=1013 u=mistral | TASK [Run deployment CephStorageHostPrepDeployment] **************************** >2018-08-20 06:21:43,411 p=1013 u=mistral | Monday 20 August 2018 06:21:43 -0400 (0:00:00.036) 0:02:25.441 ********* >2018-08-20 06:21:49,350 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.notify.json)", "delta": "0:00:05.737973", "end": "2018-08-20 06:21:49.328382", "rc": 0, "start": "2018-08-20 06:21:43.590409", "stderr": "[2018-08-20 06:21:43,615] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json\n[2018-08-20 06:21:48,948] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:48,948] (heat-config) [DEBUG] [2018-08-20 06:21:43,636] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml --extra-vars 
@/var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_variables.json\n[2018-08-20 06:21:48,944] (heat-config) [INFO] Return code 0\n[2018-08-20 06:21:48,944] (heat-config) [INFO] \nPLAY [localhost] ***************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [localhost]\n\nTASK [Create /var/lib/docker-puppet] *******************************************\nchanged: [localhost]\n\nTASK [Write docker-puppet.py] **************************************************\nchanged: [localhost]\n\nPLAY RECAP *********************************************************************\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \n\n\n[2018-08-20 06:21:48,944] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml\n\n[2018-08-20 06:21:48,948] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible\n[2018-08-20 06:21:48,948] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.notify.json\n[2018-08-20 06:21:49,322] (heat-config) [INFO] \n[2018-08-20 06:21:49,322] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:43,615] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json", "[2018-08-20 06:21:48,948] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP 
*********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:48,948] (heat-config) [DEBUG] [2018-08-20 06:21:43,636] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_variables.json", "[2018-08-20 06:21:48,944] (heat-config) [INFO] Return code 0", "[2018-08-20 06:21:48,944] (heat-config) [INFO] ", "PLAY [localhost] ***************************************************************", "", "TASK [Gathering Facts] *********************************************************", "ok: [localhost]", "", "TASK [Create /var/lib/docker-puppet] *******************************************", "changed: [localhost]", "", "TASK [Write docker-puppet.py] **************************************************", "changed: [localhost]", "", "PLAY RECAP *********************************************************************", "localhost : ok=3 changed=2 unreachable=0 failed=0 ", "", "", "[2018-08-20 06:21:48,944] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml", "", "[2018-08-20 06:21:48,948] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", "[2018-08-20 06:21:48,948] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.notify.json", "[2018-08-20 06:21:49,322] (heat-config) [INFO] ", "[2018-08-20 06:21:49,322] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:49,372 p=1013 u=mistral | TASK [Output for CephStorageHostPrepDeployment] ******************************** >2018-08-20 06:21:49,372 p=1013 u=mistral | 
Monday 20 August 2018 06:21:49 -0400 (0:00:05.961) 0:02:31.402 ********* >2018-08-20 06:21:49,420 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:43,615] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/ansible < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json", > "[2018-08-20 06:21:48,948] (heat-config) [INFO] {\"deploy_stdout\": \"\\nPLAY [localhost] ***************************************************************\\n\\nTASK [Gathering Facts] *********************************************************\\nok: [localhost]\\n\\nTASK [Create /var/lib/docker-puppet] *******************************************\\nchanged: [localhost]\\n\\nTASK [Write docker-puppet.py] **************************************************\\nchanged: [localhost]\\n\\nPLAY RECAP *********************************************************************\\nlocalhost : ok=3 changed=2 unreachable=0 failed=0 \\n\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:48,948] (heat-config) [DEBUG] [2018-08-20 06:21:43,636] (heat-config) [DEBUG] Running ansible-playbook -i localhost, /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml --extra-vars @/var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_variables.json", > "[2018-08-20 06:21:48,944] (heat-config) [INFO] Return code 0", > "[2018-08-20 06:21:48,944] (heat-config) [INFO] ", > "PLAY [localhost] ***************************************************************", > "", > "TASK [Gathering Facts] *********************************************************", > "ok: [localhost]", > "", > "TASK [Create /var/lib/docker-puppet] *******************************************", > "changed: [localhost]", > "", > "TASK [Write docker-puppet.py] **************************************************", > "changed: [localhost]", > "", > "PLAY RECAP 
*********************************************************************", > "localhost : ok=3 changed=2 unreachable=0 failed=0 ", > "", > "", > "[2018-08-20 06:21:48,944] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-ansible/1c8dc5cd-3f16-4984-875f-57f0a2e2205b_playbook.yaml", > "", > "[2018-08-20 06:21:48,948] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/ansible", > "[2018-08-20 06:21:48,948] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.json < /var/lib/heat-config/deployed/1c8dc5cd-3f16-4984-875f-57f0a2e2205b.notify.json", > "[2018-08-20 06:21:49,322] (heat-config) [INFO] ", > "[2018-08-20 06:21:49,322] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:49,444 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageHostPrepDeployment] ************* >2018-08-20 06:21:49,444 p=1013 u=mistral | Monday 20 August 2018 06:21:49 -0400 (0:00:00.072) 0:02:31.474 ********* >2018-08-20 06:21:49,465 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:49,484 p=1013 u=mistral | TASK [Lookup deployment UUID] ************************************************** >2018-08-20 06:21:49,484 p=1013 u=mistral | Monday 20 August 2018 06:21:49 -0400 (0:00:00.039) 0:02:31.514 ********* >2018-08-20 06:21:49,544 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"deployment_uuid": "3946e2bc-98a7-4ee8-bbd9-625180f67539"}, "changed": false} >2018-08-20 06:21:49,566 p=1013 u=mistral | TASK [Render deployment file for CephStorageArtifactsDeploy] ******************* >2018-08-20 06:21:49,566 p=1013 u=mistral | Monday 20 August 2018 06:21:49 -0400 (0:00:00.081) 0:02:31.596 ********* >2018-08-20 06:21:50,075 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "e7b4f635caf6d0e89a9aabe1a994e55160831a7c", "dest": 
"/var/lib/heat-config/tripleo-config-download/CephStorageArtifactsDeploy-3946e2bc-98a7-4ee8-bbd9-625180f67539", "gid": 0, "group": "root", "md5sum": "6a3b1eec168c08951b2483e9aa657f77", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2023, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760509.62-238849114393457/source", "state": "file", "uid": 0} >2018-08-20 06:21:50,099 p=1013 u=mistral | TASK [Check if deployed file exists for CephStorageArtifactsDeploy] ************ >2018-08-20 06:21:50,099 p=1013 u=mistral | Monday 20 August 2018 06:21:50 -0400 (0:00:00.532) 0:02:32.129 ********* >2018-08-20 06:21:50,290 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:21:50,310 p=1013 u=mistral | TASK [Check previous deployment rc for CephStorageArtifactsDeploy] ************* >2018-08-20 06:21:50,310 p=1013 u=mistral | Monday 20 August 2018 06:21:50 -0400 (0:00:00.210) 0:02:32.340 ********* >2018-08-20 06:21:50,334 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:50,354 p=1013 u=mistral | TASK [Remove deployed file for CephStorageArtifactsDeploy when previous deployment failed] *** >2018-08-20 06:21:50,354 p=1013 u=mistral | Monday 20 August 2018 06:21:50 -0400 (0:00:00.044) 0:02:32.384 ********* >2018-08-20 06:21:50,372 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:50,393 p=1013 u=mistral | TASK [Force remove deployed file for CephStorageArtifactsDeploy] *************** >2018-08-20 06:21:50,394 p=1013 u=mistral | Monday 20 August 2018 06:21:50 -0400 (0:00:00.039) 0:02:32.424 ********* >2018-08-20 06:21:50,411 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:50,430 p=1013 u=mistral | TASK [Run deployment CephStorageArtifactsDeploy] 
******************************* >2018-08-20 06:21:50,430 p=1013 u=mistral | Monday 20 August 2018 06:21:50 -0400 (0:00:00.036) 0:02:32.460 ********* >2018-08-20 06:21:51,034 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": "/usr/libexec/os-refresh-config/configure.d/55-heat-config\n exit $(jq .deploy_status_code /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.notify.json)", "delta": "0:00:00.418045", "end": "2018-08-20 06:21:51.013579", "rc": 0, "start": "2018-08-20 06:21:50.595534", "stderr": "[2018-08-20 06:21:50,621] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json\n[2018-08-20 06:21:50,649] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}\n[2018-08-20 06:21:50,649] (heat-config) [DEBUG] [2018-08-20 06:21:50,640] (heat-config) [INFO] artifact_urls=\n[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca\n[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_action=CREATE\n[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-CephStorageArtifactsDeploy-vsb6gq6jryhd-0-7teibgnz3wd7/f4f914a9-5334-447a-9946-cce626ab3b04\n[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment\n[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL\n[2018-08-20 06:21:50,641] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539\n[2018-08-20 06:21:50,646] (heat-config) [INFO] No artifact_urls was set. 
Skipping...\n\n[2018-08-20 06:21:50,646] (heat-config) [DEBUG] \n[2018-08-20 06:21:50,646] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539\n\n[2018-08-20 06:21:50,649] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script\n[2018-08-20 06:21:50,650] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.notify.json\n[2018-08-20 06:21:51,008] (heat-config) [INFO] \n[2018-08-20 06:21:51,008] (heat-config) [DEBUG] ", "stderr_lines": ["[2018-08-20 06:21:50,621] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json", "[2018-08-20 06:21:50,649] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", "[2018-08-20 06:21:50,649] (heat-config) [DEBUG] [2018-08-20 06:21:50,640] (heat-config) [INFO] artifact_urls=", "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_action=CREATE", "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-CephStorageArtifactsDeploy-vsb6gq6jryhd-0-7teibgnz3wd7/f4f914a9-5334-447a-9946-cce626ab3b04", "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", "[2018-08-20 06:21:50,641] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539", "[2018-08-20 06:21:50,646] (heat-config) [INFO] No artifact_urls was set. 
Skipping...", "", "[2018-08-20 06:21:50,646] (heat-config) [DEBUG] ", "[2018-08-20 06:21:50,646] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539", "", "[2018-08-20 06:21:50,649] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", "[2018-08-20 06:21:50,650] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.notify.json", "[2018-08-20 06:21:51,008] (heat-config) [INFO] ", "[2018-08-20 06:21:51,008] (heat-config) [DEBUG] "], "stdout": "", "stdout_lines": []} >2018-08-20 06:21:51,060 p=1013 u=mistral | TASK [Output for CephStorageArtifactsDeploy] *********************************** >2018-08-20 06:21:51,061 p=1013 u=mistral | Monday 20 August 2018 06:21:51 -0400 (0:00:00.630) 0:02:33.090 ********* >2018-08-20 06:21:51,119 p=1013 u=mistral | ok: [ceph-0] => { > "msg": [ > { > "stderr": [ > "[2018-08-20 06:21:50,621] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json", > "[2018-08-20 06:21:50,649] (heat-config) [INFO] {\"deploy_stdout\": \"No artifact_urls was set. 
Skipping...\\n\", \"deploy_stderr\": \"\", \"deploy_status_code\": 0}", > "[2018-08-20 06:21:50,649] (heat-config) [DEBUG] [2018-08-20 06:21:50,640] (heat-config) [INFO] artifact_urls=", > "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_server_id=a7efb551-ebaa-420e-ba63-97b84e6a68ca", > "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_action=CREATE", > "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_stack_id=overcloud-AllNodesDeploySteps-7viusabteozk-CephStorageArtifactsDeploy-vsb6gq6jryhd-0-7teibgnz3wd7/f4f914a9-5334-447a-9946-cce626ab3b04", > "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_resource_name=TripleOSoftwareDeployment", > "[2018-08-20 06:21:50,641] (heat-config) [INFO] deploy_signal_transport=NO_SIGNAL", > "[2018-08-20 06:21:50,641] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539", > "[2018-08-20 06:21:50,646] (heat-config) [INFO] No artifact_urls was set. Skipping...", > "", > "[2018-08-20 06:21:50,646] (heat-config) [DEBUG] ", > "[2018-08-20 06:21:50,646] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/3946e2bc-98a7-4ee8-bbd9-625180f67539", > "", > "[2018-08-20 06:21:50,649] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script", > "[2018-08-20 06:21:50,650] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.json < /var/lib/heat-config/deployed/3946e2bc-98a7-4ee8-bbd9-625180f67539.notify.json", > "[2018-08-20 06:21:51,008] (heat-config) [INFO] ", > "[2018-08-20 06:21:51,008] (heat-config) [DEBUG] " > ] > }, > { > "status_code": "0" > } > ] >} >2018-08-20 06:21:51,138 p=1013 u=mistral | TASK [Check-mode for Run deployment CephStorageArtifactsDeploy] **************** >2018-08-20 06:21:51,138 p=1013 u=mistral | Monday 20 August 2018 06:21:51 -0400 (0:00:00.077) 0:02:33.168 ********* >2018-08-20 06:21:51,154 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,162 p=1013 u=mistral | PLAY [Host prep steps] ********************************************************* >2018-08-20 06:21:51,207 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:21:51,207 p=1013 u=mistral | Monday 20 August 2018 06:21:51 -0400 (0:00:00.068) 0:02:33.237 ********* >2018-08-20 06:21:51,283 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,284 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,300 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/aodh) => {"changed": false, "item": "/var/log/containers/aodh", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,306 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": false, "item": "/var/log/containers/httpd/aodh-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,441 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/aodh) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/aodh", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:51,624 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/aodh-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/aodh-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/aodh-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 
0} >2018-08-20 06:21:51,656 p=1013 u=mistral | TASK [aodh logs readme] ******************************************************** >2018-08-20 06:21:51,657 p=1013 u=mistral | Monday 20 August 2018 06:21:51 -0400 (0:00:00.449) 0:02:33.687 ********* >2018-08-20 06:21:51,718 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:51,736 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,186 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "b6cf6dbe054f430c33d39c1a1a88593536d6e659", "msg": "Destination directory /var/log/aodh does not exist"} >2018-08-20 06:21:52,186 p=1013 u=mistral | ...ignoring >2018-08-20 06:21:52,215 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:21:52,215 p=1013 u=mistral | Monday 20 August 2018 06:21:52 -0400 (0:00:00.558) 0:02:34.245 ********* >2018-08-20 06:21:52,275 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,291 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,416 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/aodh", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:52,441 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:21:52,442 p=1013 u=mistral | Monday 20 August 2018 06:21:52 -0400 (0:00:00.226) 0:02:34.472 ********* >2018-08-20 06:21:52,500 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,516 p=1013 u=mistral | skipping: 
[ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,636 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:52,662 p=1013 u=mistral | TASK [ceilometer logs readme] ************************************************** >2018-08-20 06:21:52,662 p=1013 u=mistral | Monday 20 August 2018 06:21:52 -0400 (0:00:00.220) 0:02:34.692 ********* >2018-08-20 06:21:52,720 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:52,737 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,131 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"} >2018-08-20 06:21:53,131 p=1013 u=mistral | ...ignoring >2018-08-20 06:21:53,154 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:21:53,154 p=1013 u=mistral | Monday 20 August 2018 06:21:53 -0400 (0:00:00.491) 0:02:35.184 ********* >2018-08-20 06:21:53,206 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,207 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,223 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", 
"skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,230 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": false, "item": "/var/log/containers/httpd/cinder-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,358 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:53,534 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/cinder-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/cinder-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/cinder-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:53,562 p=1013 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-08-20 06:21:53,562 p=1013 u=mistral | Monday 20 August 2018 06:21:53 -0400 (0:00:00.407) 0:02:35.592 ********* >2018-08-20 06:21:53,619 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:53,633 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,114 p=1013 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "0a3814f5aad089ba842c13ffc2c7bb7a7b3e8292", "msg": "Destination directory /var/log/cinder does not exist"} >2018-08-20 06:21:54,114 p=1013 u=mistral | ...ignoring >2018-08-20 06:21:54,137 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:21:54,138 p=1013 u=mistral | Monday 20 August 2018 06:21:54 -0400 (0:00:00.575) 0:02:36.168 ********* >2018-08-20 06:21:54,200 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,201 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,204 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,208 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,401 p=1013 u=mistral | changed: [controller-0] => (item=/var/lib/cinder) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:54,572 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:54,598 p=1013 u=mistral | TASK [ensure ceph configurations exist] 
**************************************** >2018-08-20 06:21:54,598 p=1013 u=mistral | Monday 20 August 2018 06:21:54 -0400 (0:00:00.460) 0:02:36.628 ********* >2018-08-20 06:21:54,713 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,732 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:54,880 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:54,909 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:21:54,910 p=1013 u=mistral | Monday 20 August 2018 06:21:54 -0400 (0:00:00.311) 0:02:36.940 ********* >2018-08-20 06:21:55,016 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,041 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,148 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:55,172 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:21:55,172 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.262) 0:02:37.202 ********* >2018-08-20 06:21:55,230 p=1013 u=mistral | skipping: 
[compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,232 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,251 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,258 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,387 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/cinder", "mode": "0755", "owner": "root", "path": "/var/log/containers/cinder", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:55,551 p=1013 u=mistral | ok: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/cinder", "mode": "0755", "owner": "root", "path": "/var/lib/cinder", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:55,575 p=1013 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-08-20 06:21:55,575 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.402) 0:02:37.605 ********* >2018-08-20 06:21:55,633 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,634 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"cinder_enable_iscsi_backend": false}, "changed": false} >2018-08-20 06:21:55,641 p=1013 u=mistral | skipping: [ceph-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,665 p=1013 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-08-20 06:21:55,665 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.090) 0:02:37.695 ********* >2018-08-20 06:21:55,694 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,719 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,731 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,754 p=1013 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-08-20 06:21:55,754 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.088) 0:02:37.784 ********* >2018-08-20 06:21:55,782 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,807 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,820 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,843 p=1013 u=mistral | TASK [set_fact] **************************************************************** >2018-08-20 06:21:55,843 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.089) 0:02:37.873 ********* >2018-08-20 06:21:55,915 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", 
"container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-08-20 06:21:55,916 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,933 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:55,956 p=1013 u=mistral | TASK [include_role] ************************************************************ >2018-08-20 06:21:55,956 p=1013 u=mistral | Monday 20 August 2018 06:21:55 -0400 (0:00:00.113) 0:02:37.986 ********* >2018-08-20 06:21:56,012 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:56,030 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:56,107 p=1013 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-08-20 06:21:56,107 p=1013 u=mistral | Monday 20 August 2018 06:21:56 -0400 (0:00:00.150) 0:02:38.137 ********* >2018-08-20 06:21:56,427 p=1013 u=mistral | changed: [controller-0] => {"changed": true} >2018-08-20 06:21:56,462 p=1013 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-08-20 06:21:56,469 p=1013 u=mistral | Monday 20 August 2018 06:21:56 -0400 (0:00:00.362) 0:02:38.499 ********* >2018-08-20 06:21:57,016 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]} >2018-08-20 06:21:57,041 p=1013 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-08-20 06:21:57,041 p=1013 u=mistral | Monday 20 August 2018 06:21:57 -0400 (0:00:00.571) 0:02:39.071 ********* >2018-08-20 06:21:57,253 
p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:57,278 p=1013 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-08-20 06:21:57,278 p=1013 u=mistral | Monday 20 August 2018 06:21:57 -0400 (0:00:00.237) 0:02:39.308 ********* >2018-08-20 06:21:57,609 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-08-20 06:21:57,633 p=1013 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-08-20 06:21:57,633 p=1013 u=mistral | Monday 20 August 2018 06:21:57 -0400 (0:00:00.354) 0:02:39.663 ********* >2018-08-20 06:21:57,961 p=1013 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:21:57,983 p=1013 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-08-20 06:21:57,983 p=1013 u=mistral | Monday 20 August 2018 06:21:57 -0400 (0:00:00.350) 0:02:40.013 ********* >2018-08-20 06:21:58,218 p=1013 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-08-20 06:21:58,239 p=1013 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-08-20 06:21:58,239 p=1013 u=mistral | Monday 20 August 2018 06:21:58 -0400 (0:00:00.256) 0:02:40.269 ********* >2018-08-20 06:21:58,446 p=1013 u=mistral | changed: [controller-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, 
"group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:21:58,484 p=1013 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] ********************* >2018-08-20 06:21:58,484 p=1013 u=mistral | Monday 20 August 2018 06:21:58 -0400 (0:00:00.244) 0:02:40.514 ********* >2018-08-20 06:21:59,060 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760518.53-239480300395076/source", "state": "file", "uid": 0} >2018-08-20 06:21:59,082 p=1013 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-08-20 06:21:59,082 p=1013 u=mistral | Monday 20 August 2018 06:21:59 -0400 (0:00:00.597) 0:02:41.112 ********* >2018-08-20 06:21:59,293 p=1013 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:21:59,315 p=1013 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-08-20 06:21:59,316 p=1013 u=mistral | Monday 20 August 2018 06:21:59 -0400 (0:00:00.233) 0:02:41.346 ********* >2018-08-20 06:21:59,531 p=1013 u=mistral | changed: [controller-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:21:59,555 p=1013 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-08-20 06:21:59,556 p=1013 u=mistral | Monday 20 August 2018 06:21:59 -0400 (0:00:00.239) 0:02:41.586 ********* >2018-08-20 06:21:59,951 p=1013 u=mistral | changed: 
[controller-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-08-20 06:21:59,975 p=1013 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-08-20 06:21:59,975 p=1013 u=mistral | Monday 20 August 2018 06:21:59 -0400 (0:00:00.419) 0:02:42.005 ********* >2018-08-20 06:21:59,996 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:21:59,997 p=1013 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-08-20 06:21:59,997 p=1013 u=mistral | Monday 20 August 2018 06:21:59 -0400 (0:00:00.022) 0:02:42.027 ********* >2018-08-20 06:22:00,237 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002900", "end": "2018-08-20 06:22:00.192618", "rc": 0, "start": "2018-08-20 06:22:00.189718", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-08-20 06:22:00,237 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-08-20 06:22:00,237 p=1013 u=mistral | Monday 20 August 2018 06:22:00 -0400 (0:00:00.240) 0:02:42.267 ********* >2018-08-20 06:22:00,722 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "name": null, "status": {}} >2018-08-20 06:22:00,722 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-08-20 06:22:00,723 p=1013 u=mistral | Monday 20 August 2018 06:22:00 -0400 (0:00:00.484) 0:02:42.752 ********* >2018-08-20 06:22:02,294 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "system.slice systemd-journald.socket network.target docker-storage-setup.service registries.service basic.target 
rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; 
stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127799", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", 
"StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-08-20 06:22:02,295 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-08-20 06:22:02,295 p=1013 u=mistral | Monday 20 August 2018 06:22:02 -0400 (0:00:01.572) 0:02:44.325 ********* >2018-08-20 06:22:02,364 p=1013 u=mistral | Pausing for 10 seconds >2018-08-20 06:22:02,365 p=1013 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-08-20 06:22:02,365 p=1013 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-08-20 06:22:12,367 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-08-20 06:22:02.364382", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-08-20 06:22:12.364573", "user_input": ""} >2018-08-20 06:22:12,368 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-08-20 06:22:12,368 p=1013 u=mistral | Monday 20 August 2018 06:22:12 -0400 (0:00:10.072) 0:02:54.398 ********* >2018-08-20 06:22:12,642 p=1013 u=mistral | changed: [controller-0] => {"attempts": 1, "changed": true, "cmd": 
["/usr/bin/docker", "images"], "delta": "0:00:00.041388", "end": "2018-08-20 06:22:12.615610", "rc": 0, "start": "2018-08-20 06:22:12.574222", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-08-20 06:22:12,668 p=1013 u=mistral | TASK [container-registry : enable and start docker] **************************** >2018-08-20 06:22:12,668 p=1013 u=mistral | Monday 20 August 2018 06:22:12 -0400 (0:00:00.300) 0:02:54.698 ********* >2018-08-20 06:22:12,966 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-08-20 06:22:02 EDT", "ActiveEnterTimestampMonotonic": "394027820", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "system.slice systemd-journald.socket network.target docker-storage-setup.service registries.service basic.target rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:22:01 EDT", "AssertTimestampMonotonic": "392840929", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:22:01 EDT", "ConditionTimestampMonotonic": "392840929", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", 
"DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "15145", "ExecMainStartTimestamp": "Mon 2018-08-20 06:22:01 EDT", "ExecMainStartTimestampMonotonic": "392842374", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-08-20 06:22:01 EDT] ; stop_time=[n/a] ; pid=15145 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:22:01 EDT", "InactiveExitTimestampMonotonic": "392842406", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": 
"18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127799", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "15145", "MemoryAccounting": "no", "MemoryCurrent": "67227648", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "registries.service docker-cleanup.timer rhel-push-plugin.socket basic.target", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "26", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": 
"disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Mon 2018-08-20 06:22:02 EDT", "WatchdogTimestampMonotonic": "394027647", "WatchdogUSec": "0"}} >2018-08-20 06:22:12,991 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:12,991 p=1013 u=mistral | Monday 20 August 2018 06:22:12 -0400 (0:00:00.322) 0:02:55.021 ********* >2018-08-20 06:22:13,072 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,076 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/glance) => {"changed": false, "item": "/var/log/containers/glance", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,203 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/glance) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/glance", "mode": "0755", "owner": "root", "path": "/var/log/containers/glance", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:13,229 p=1013 u=mistral | TASK [glance logs readme] ****************************************************** >2018-08-20 06:22:13,229 p=1013 u=mistral | Monday 20 August 2018 06:22:13 -0400 (0:00:00.238) 0:02:55.259 ********* >2018-08-20 06:22:13,289 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,300 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,705 p=1013 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "e368ae3272baeb19e1113009ea5dae00e797c919", "msg": "Destination directory /var/log/glance does not exist"} >2018-08-20 06:22:13,706 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:13,732 p=1013 u=mistral | TASK [set_fact] **************************************************************** >2018-08-20 06:22:13,732 p=1013 u=mistral | Monday 20 August 2018 06:22:13 -0400 (0:00:00.502) 0:02:55.762 ********* >2018-08-20 06:22:13,764 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,790 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,806 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,830 p=1013 u=mistral | TASK [file] ******************************************************************** >2018-08-20 06:22:13,831 p=1013 u=mistral | Monday 20 August 2018 06:22:13 -0400 (0:00:00.098) 0:02:55.861 ********* >2018-08-20 06:22:13,858 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,884 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,896 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,918 p=1013 u=mistral | TASK [stat] ******************************************************************** >2018-08-20 06:22:13,918 p=1013 u=mistral | Monday 20 August 2018 06:22:13 -0400 (0:00:00.087) 0:02:55.948 ********* >2018-08-20 06:22:13,944 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:13,968 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-08-20 06:22:13,980 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,005 p=1013 u=mistral | TASK [copy] ******************************************************************** >2018-08-20 06:22:14,005 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.086) 0:02:56.035 ********* >2018-08-20 06:22:14,034 p=1013 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,060 p=1013 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,077 p=1013 u=mistral | skipping: [ceph-0] => (item={u'NETAPP_SHARE': u''}) => {"changed": false, "item": {"NETAPP_SHARE": ""}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,099 p=1013 u=mistral | TASK [mount] ******************************************************************* >2018-08-20 06:22:14,099 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.093) 0:02:56.129 ********* >2018-08-20 06:22:14,130 p=1013 u=mistral | skipping: [controller-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,158 p=1013 u=mistral | skipping: [compute-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,174 p=1013 u=mistral | skipping: 
[ceph-0] => (item={u'NETAPP_SHARE': u'', u'NFS_OPTIONS': u'_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0'}) => {"changed": false, "item": {"NETAPP_SHARE": "", "NFS_OPTIONS": "_netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0"}, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,197 p=1013 u=mistral | TASK [Mount NFS on host] ******************************************************* >2018-08-20 06:22:14,197 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.097) 0:02:56.227 ********* >2018-08-20 06:22:14,224 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,249 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,260 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,283 p=1013 u=mistral | TASK [Mount Node Staging Location] ********************************************* >2018-08-20 06:22:14,283 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.086) 0:02:56.313 ********* >2018-08-20 06:22:14,313 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,341 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,352 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,375 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:14,376 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.092) 0:02:56.406 ********* >2018-08-20 06:22:14,438 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": 
"/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,439 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,449 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/gnocchi) => {"changed": false, "item": "/var/log/containers/gnocchi", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,457 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": false, "item": "/var/log/containers/httpd/gnocchi-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,587 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/gnocchi) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/gnocchi", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:14,749 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/gnocchi-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/gnocchi-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/gnocchi-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:14,777 p=1013 u=mistral | TASK [gnocchi logs readme] ***************************************************** >2018-08-20 06:22:14,778 p=1013 u=mistral | Monday 20 August 2018 06:22:14 -0400 (0:00:00.401) 0:02:56.808 ********* >2018-08-20 06:22:14,855 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:14,876 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-08-20 06:22:15,313 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "2f6114e0f135d7222e70a07579ab0b2b6f967ff8", "msg": "Destination directory /var/log/gnocchi does not exist"} >2018-08-20 06:22:15,313 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:15,338 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:15,339 p=1013 u=mistral | Monday 20 August 2018 06:22:15 -0400 (0:00:00.560) 0:02:57.369 ********* >2018-08-20 06:22:15,406 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,414 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,565 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/gnocchi", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:15,593 p=1013 u=mistral | TASK [get parameters] ********************************************************** >2018-08-20 06:22:15,594 p=1013 u=mistral | Monday 20 August 2018 06:22:15 -0400 (0:00:00.255) 0:02:57.624 ********* >2018-08-20 06:22:15,686 p=1013 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:15,687 p=1013 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:15,708 p=1013 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:15,731 p=1013 u=mistral | TASK [get DeployedSSLCertificatePath attributes] 
******************************* >2018-08-20 06:22:15,731 p=1013 u=mistral | Monday 20 August 2018 06:22:15 -0400 (0:00:00.137) 0:02:57.761 ********* >2018-08-20 06:22:15,761 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,787 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,802 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,824 p=1013 u=mistral | TASK [Assign bootstrap node] *************************************************** >2018-08-20 06:22:15,825 p=1013 u=mistral | Monday 20 August 2018 06:22:15 -0400 (0:00:00.093) 0:02:57.855 ********* >2018-08-20 06:22:15,886 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,891 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,902 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,926 p=1013 u=mistral | TASK [set is_bootstrap_node fact] ********************************************** >2018-08-20 06:22:15,926 p=1013 u=mistral | Monday 20 August 2018 06:22:15 -0400 (0:00:00.101) 0:02:57.956 ********* >2018-08-20 06:22:15,954 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,980 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:15,992 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,013 p=1013 u=mistral | TASK [get haproxy status] ****************************************************** >2018-08-20 06:22:16,014 
p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.087) 0:02:58.043 ********* >2018-08-20 06:22:16,040 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,064 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,077 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,099 p=1013 u=mistral | TASK [get pacemaker status] **************************************************** >2018-08-20 06:22:16,099 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.085) 0:02:58.129 ********* >2018-08-20 06:22:16,127 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,154 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,165 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,188 p=1013 u=mistral | TASK [get docker status] ******************************************************* >2018-08-20 06:22:16,189 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.089) 0:02:58.219 ********* >2018-08-20 06:22:16,215 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,243 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,254 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,276 p=1013 u=mistral | TASK [get container_id] ******************************************************** >2018-08-20 06:22:16,276 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 
(0:00:00.087) 0:02:58.306 ********* >2018-08-20 06:22:16,305 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,330 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,342 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,366 p=1013 u=mistral | TASK [get pcs resource name for haproxy container] ***************************** >2018-08-20 06:22:16,366 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.089) 0:02:58.396 ********* >2018-08-20 06:22:16,396 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,421 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,435 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,457 p=1013 u=mistral | TASK [remove DeployedSSLCertificatePath if is dir] ***************************** >2018-08-20 06:22:16,457 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.090) 0:02:58.487 ********* >2018-08-20 06:22:16,485 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,509 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,526 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,552 p=1013 u=mistral | TASK [push certificate content] ************************************************ >2018-08-20 06:22:16,552 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.095) 0:02:58.582 ********* >2018-08-20 
06:22:16,580 p=1013 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:16,606 p=1013 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:16,620 p=1013 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:22:16,643 p=1013 u=mistral | TASK [set certificate ownership] *********************************************** >2018-08-20 06:22:16,643 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.090) 0:02:58.673 ********* >2018-08-20 06:22:16,671 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,696 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,707 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,729 p=1013 u=mistral | TASK [reload haproxy if enabled] *********************************************** >2018-08-20 06:22:16,730 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.086) 0:02:58.759 ********* >2018-08-20 06:22:16,758 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,783 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,795 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,818 p=1013 u=mistral | TASK [restart pacemaker resource for haproxy] ********************************** 
>2018-08-20 06:22:16,818 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.088) 0:02:58.848 ********* >2018-08-20 06:22:16,884 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,885 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,898 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,922 p=1013 u=mistral | TASK [set kolla_dir fact] ****************************************************** >2018-08-20 06:22:16,922 p=1013 u=mistral | Monday 20 August 2018 06:22:16 -0400 (0:00:00.104) 0:02:58.952 ********* >2018-08-20 06:22:16,952 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,978 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:16,991 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,012 p=1013 u=mistral | TASK [set certificate group on host via container] ***************************** >2018-08-20 06:22:17,013 p=1013 u=mistral | Monday 20 August 2018 06:22:17 -0400 (0:00:00.090) 0:02:59.043 ********* >2018-08-20 06:22:17,040 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,063 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,076 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,097 p=1013 u=mistral | TASK [copy certificate from kolla directory to final location] ***************** >2018-08-20 06:22:17,098 p=1013 u=mistral | Monday 20 
August 2018 06:22:17 -0400 (0:00:00.085) 0:02:59.128 ********* >2018-08-20 06:22:17,124 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,152 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,165 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,187 p=1013 u=mistral | TASK [send restart order to haproxy container] ********************************* >2018-08-20 06:22:17,188 p=1013 u=mistral | Monday 20 August 2018 06:22:17 -0400 (0:00:00.089) 0:02:59.217 ********* >2018-08-20 06:22:17,214 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,238 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,249 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,270 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:22:17,271 p=1013 u=mistral | Monday 20 August 2018 06:22:17 -0400 (0:00:00.083) 0:02:59.301 ********* >2018-08-20 06:22:17,324 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,343 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/haproxy) => {"changed": false, "item": "/var/lib/haproxy", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,533 p=1013 u=mistral | ok: [controller-0] => (item=/var/lib/haproxy) => {"changed": false, "gid": 188, "group": "haproxy", "item": "/var/lib/haproxy", "mode": "0755", "owner": "haproxy", "path": "/var/lib/haproxy", "secontext": 
"system_u:object_r:haproxy_var_lib_t:s0", "size": 6, "state": "directory", "uid": 188} >2018-08-20 06:22:17,558 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:17,558 p=1013 u=mistral | Monday 20 August 2018 06:22:17 -0400 (0:00:00.287) 0:02:59.588 ********* >2018-08-20 06:22:17,636 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,637 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,648 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,714 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": false, "item": "/var/log/containers/httpd/heat-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:17,806 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/heat) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:17,963 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:17,988 p=1013 u=mistral | TASK [heat logs readme] 
******************************************************** >2018-08-20 06:22:17,988 p=1013 u=mistral | Monday 20 August 2018 06:22:17 -0400 (0:00:00.430) 0:03:00.018 ********* >2018-08-20 06:22:18,044 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,058 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,452 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "d30ca3bda176434d31659e7379616dd162ddb246", "msg": "Destination directory /var/log/heat does not exist"} >2018-08-20 06:22:18,452 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:18,478 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:18,478 p=1013 u=mistral | Monday 20 August 2018 06:22:18 -0400 (0:00:00.489) 0:03:00.508 ********* >2018-08-20 06:22:18,540 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,543 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,554 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/heat) => {"changed": false, "item": "/var/log/containers/heat", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,559 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": false, "item": "/var/log/containers/httpd/heat-api-cfn", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,687 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/heat) => {"changed": false, "gid": 0, "group": "root", "item": 
"/var/log/containers/heat", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:18,853 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/heat-api-cfn) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/heat-api-cfn", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/heat-api-cfn", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:18,882 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:18,882 p=1013 u=mistral | Monday 20 August 2018 06:22:18 -0400 (0:00:00.403) 0:03:00.912 ********* >2018-08-20 06:22:18,945 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:18,963 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,089 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/heat", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:19,112 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:19,112 p=1013 u=mistral | Monday 20 August 2018 06:22:19 -0400 (0:00:00.229) 0:03:01.142 ********* >2018-08-20 06:22:19,167 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,172 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", 
"skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,191 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/horizon) => {"changed": false, "item": "/var/log/containers/horizon", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,197 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/horizon) => {"changed": false, "item": "/var/log/containers/httpd/horizon", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,326 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:19,492 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/horizon) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/horizon", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/horizon", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:19,521 p=1013 u=mistral | TASK [horizon logs readme] ***************************************************** >2018-08-20 06:22:19,521 p=1013 u=mistral | Monday 20 August 2018 06:22:19 -0400 (0:00:00.409) 0:03:01.551 ********* >2018-08-20 06:22:19,579 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,601 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:19,992 p=1013 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "ac324739761cb36b925d6e309482e26f7fe49b91", "msg": "Destination directory /var/log/horizon does not exist"} >2018-08-20 06:22:19,992 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:20,017 p=1013 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-08-20 06:22:20,017 p=1013 u=mistral | Monday 20 August 2018 06:22:20 -0400 (0:00:00.495) 0:03:02.047 ********* >2018-08-20 06:22:20,085 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,100 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,241 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"atime": 1534760520.6590235, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1534520507.4778163, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4554196, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1650827887", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-08-20 06:22:20,267 p=1013 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-08-20 06:22:20,267 p=1013 u=mistral | Monday 20 August 2018 06:22:20 -0400 (0:00:00.250) 0:03:02.297 ********* >2018-08-20 06:22:20,325 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-08-20 06:22:20,343 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,552 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Mon 2018-08-20 06:15:31 EDT", "ActiveEnterTimestampMonotonic": "3440689", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:15:31 EDT", "AssertTimestampMonotonic": "3440295", "Backlog": "128", "Before": "shutdown.target sockets.target iscsid.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:15:31 EDT", "ConditionTimestampMonotonic": "3440295", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:15:31 EDT", "InactiveExitTimestampMonotonic": "3440689", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", 
"KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127799", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127799", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": 
"no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-08-20 06:22:20,576 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:20,577 p=1013 u=mistral | Monday 20 August 2018 06:22:20 -0400 (0:00:00.309) 0:03:02.607 ********* >2018-08-20 06:22:20,646 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,647 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,661 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/keystone) => {"changed": false, "item": "/var/log/containers/keystone", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,666 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/keystone) => {"changed": false, "item": "/var/log/containers/httpd/keystone", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:20,795 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/keystone) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:20,960 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/keystone) => {"changed": true, "gid": 0, "group": "root", "item": 
"/var/log/containers/httpd/keystone", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/keystone", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:20,985 p=1013 u=mistral | TASK [keystone logs readme] **************************************************** >2018-08-20 06:22:20,985 p=1013 u=mistral | Monday 20 August 2018 06:22:20 -0400 (0:00:00.408) 0:03:03.015 ********* >2018-08-20 06:22:21,041 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:21,059 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:21,487 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "910be882addb6df99267e9bd303f6d9bf658562e", "msg": "Destination directory /var/log/keystone does not exist"} >2018-08-20 06:22:21,487 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:21,512 p=1013 u=mistral | TASK [memcached logs readme] *************************************************** >2018-08-20 06:22:21,512 p=1013 u=mistral | Monday 20 August 2018 06:22:21 -0400 (0:00:00.526) 0:03:03.542 ********* >2018-08-20 06:22:21,574 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:21,596 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,019 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3b6f3952a077d2e5003df30c8c439478917cb6c4", "dest": "/var/log/memcached-readme.txt", "gid": 0, "group": "root", "md5sum": "ffdb1524e5789470856ae32ded4e2f80", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 48, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760541.55-109164132639239/source", "state": "file", "uid": 0} >2018-08-20 06:22:22,046 
p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:22:22,046 p=1013 u=mistral | Monday 20 August 2018 06:22:22 -0400 (0:00:00.534) 0:03:04.076 ********* >2018-08-20 06:22:22,116 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,117 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,138 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/mysql) => {"changed": false, "item": "/var/log/containers/mysql", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,140 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/mysql) => {"changed": false, "item": "/var/lib/mysql", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,261 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/mysql) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/mysql", "mode": "0755", "owner": "root", "path": "/var/log/containers/mysql", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:22,426 p=1013 u=mistral | ok: [controller-0] => (item=/var/lib/mysql) => {"changed": false, "gid": 27, "group": "mysql", "item": "/var/lib/mysql", "mode": "0755", "owner": "mysql", "path": "/var/lib/mysql", "secontext": "system_u:object_r:mysqld_db_t:s0", "size": 6, "state": "directory", "uid": 27} >2018-08-20 06:22:22,452 p=1013 u=mistral | TASK [mysql logs readme] ******************************************************* >2018-08-20 06:22:22,452 p=1013 u=mistral | Monday 20 August 2018 06:22:22 -0400 (0:00:00.405) 0:03:04.482 ********* >2018-08-20 06:22:22,511 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,527 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:22,948 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "de8fb5fe96200ab286121f8a09419702bd693743", "dest": "/var/log/mariadb/readme.txt", "gid": 0, "group": "root", "md5sum": "1f3e80eed7060dfe5ee49c8063244c53", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:mysqld_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760542.5-51561665144895/source", "state": "file", "uid": 0} >2018-08-20 06:22:22,972 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:22,972 p=1013 u=mistral | Monday 20 August 2018 06:22:22 -0400 (0:00:00.520) 0:03:05.002 ********* >2018-08-20 06:22:23,029 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,032 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,047 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,053 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": false, "item": "/var/log/containers/httpd/neutron-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,174 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", 
"path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:23,336 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/neutron-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/neutron-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/neutron-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:23,361 p=1013 u=mistral | TASK [neutron logs readme] ***************************************************** >2018-08-20 06:22:23,361 p=1013 u=mistral | Monday 20 August 2018 06:22:23 -0400 (0:00:00.388) 0:03:05.391 ********* >2018-08-20 06:22:23,418 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,431 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,838 p=1013 u=mistral | fatal: [controller-0]: FAILED! 
=> {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"} >2018-08-20 06:22:23,839 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:23,864 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:23,864 p=1013 u=mistral | Monday 20 August 2018 06:22:23 -0400 (0:00:00.503) 0:03:05.894 ********* >2018-08-20 06:22:23,926 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:23,945 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,073 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:24,101 p=1013 u=mistral | TASK [create /var/lib/neutron] ************************************************* >2018-08-20 06:22:24,101 p=1013 u=mistral | Monday 20 August 2018 06:22:24 -0400 (0:00:00.236) 0:03:06.131 ********* >2018-08-20 06:22:24,156 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,171 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,301 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/neutron", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} 
>2018-08-20 06:22:24,325 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:24,325 p=1013 u=mistral | Monday 20 August 2018 06:22:24 -0400 (0:00:00.223) 0:03:06.355 ********* >2018-08-20 06:22:24,382 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,384 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,397 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,412 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": false, "item": "/var/log/containers/httpd/nova-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,531 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:24,699 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:24,728 p=1013 u=mistral | TASK [nova logs readme] ******************************************************** >2018-08-20 06:22:24,728 p=1013 u=mistral | 
Monday 20 August 2018 06:22:24 -0400 (0:00:00.402) 0:03:06.758 ********* >2018-08-20 06:22:24,793 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:24,807 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,226 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-08-20 06:22:25,226 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:25,252 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:25,252 p=1013 u=mistral | Monday 20 August 2018 06:22:25 -0400 (0:00:00.524) 0:03:07.282 ********* >2018-08-20 06:22:25,337 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,353 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,473 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:25,500 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:25,500 p=1013 u=mistral | Monday 20 August 2018 06:22:25 -0400 (0:00:00.248) 0:03:07.530 ********* >2018-08-20 06:22:25,574 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,575 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": 
"/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,578 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/nova) => {"changed": false, "item": "/var/log/containers/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,586 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": false, "item": "/var/log/containers/httpd/nova-placement", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,723 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers/nova) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers/nova", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:25,874 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/nova-placement) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/nova-placement", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/nova-placement", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:25,905 p=1013 u=mistral | TASK [NTP settings] ************************************************************ >2018-08-20 06:22:25,905 p=1013 u=mistral | Monday 20 August 2018 06:22:25 -0400 (0:00:00.404) 0:03:07.935 ********* >2018-08-20 06:22:25,974 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-08-20 06:22:25,975 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:25,991 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:26,016 p=1013 u=mistral | 
TASK [Install ntpdate] ********************************************************* >2018-08-20 06:22:26,016 p=1013 u=mistral | Monday 20 August 2018 06:22:26 -0400 (0:00:00.111) 0:03:08.046 ********* >2018-08-20 06:22:26,047 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:26,074 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:26,086 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:26,110 p=1013 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-08-20 06:22:26,110 p=1013 u=mistral | Monday 20 August 2018 06:22:26 -0400 (0:00:00.093) 0:03:08.140 ********* >2018-08-20 06:22:26,171 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:26,191 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:33,612 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:07.309018", "end": "2018-08-20 06:22:33.595040", "rc": 0, "start": "2018-08-20 06:22:26.286022", "stderr": "", "stderr_lines": [], "stdout": "20 Aug 06:22:33 ntpdate[16350]: adjust time server 10.11.160.238 offset -0.000289 sec", "stdout_lines": ["20 Aug 06:22:33 ntpdate[16350]: adjust time server 10.11.160.238 offset -0.000289 sec"]} >2018-08-20 06:22:33,637 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:33,637 p=1013 u=mistral | Monday 20 August 2018 06:22:33 -0400 (0:00:07.526) 0:03:15.667 ********* >2018-08-20 06:22:33,722 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/panko) => {"changed": false, "item": 
"/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:33,724 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:33,746 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/panko) => {"changed": false, "item": "/var/log/containers/panko", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:33,759 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": false, "item": "/var/log/containers/httpd/panko-api", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:33,862 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/panko) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/panko", "mode": "0755", "owner": "root", "path": "/var/log/containers/panko", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:34,018 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/httpd/panko-api) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/httpd/panko-api", "mode": "0755", "owner": "root", "path": "/var/log/containers/httpd/panko-api", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:34,044 p=1013 u=mistral | TASK [panko logs readme] ******************************************************* >2018-08-20 06:22:34,044 p=1013 u=mistral | Monday 20 August 2018 06:22:34 -0400 (0:00:00.406) 0:03:16.074 ********* >2018-08-20 06:22:34,098 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:34,115 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 
06:22:34,543 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "903397bbd82e9b1f53087e3d7e8975d851857ce2", "msg": "Destination directory /var/log/panko does not exist"} >2018-08-20 06:22:34,543 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:34,567 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:22:34,567 p=1013 u=mistral | Monday 20 August 2018 06:22:34 -0400 (0:00:00.522) 0:03:16.597 ********* >2018-08-20 06:22:34,635 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:34,636 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:34,644 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/rabbitmq) => {"changed": false, "item": "/var/lib/rabbitmq", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:34,649 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/rabbitmq) => {"changed": false, "item": "/var/log/containers/rabbitmq", "skip_reason": "Conditional result was False"} >2018-08-20 06:22:34,794 p=1013 u=mistral | changed: [controller-0] => (item=/var/lib/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/lib/rabbitmq", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:34,952 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/rabbitmq) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/rabbitmq", "mode": "0755", "owner": "root", "path": "/var/log/containers/rabbitmq", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} 
2018-08-20 06:22:34,977 p=1013 u=mistral | TASK [rabbitmq logs readme] ****************************************************
2018-08-20 06:22:34,977 p=1013 u=mistral | Monday 20 August 2018 06:22:34 -0400 (0:00:00.409)       0:03:17.007 *********
2018-08-20 06:22:35,029 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,046 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,443 p=1013 u=mistral | fatal: [controller-0]: FAILED! => {"changed": false, "checksum": "ee241f2199f264c9d0f384cf389fe255e8bf8a77", "msg": "Destination directory /var/log/rabbitmq does not exist"}
2018-08-20 06:22:35,444 p=1013 u=mistral | ...ignoring
2018-08-20 06:22:35,468 p=1013 u=mistral | TASK [stop the Erlang port mapper on the host and make sure it cannot bind to the port used by container] ***
2018-08-20 06:22:35,469 p=1013 u=mistral | Monday 20 August 2018 06:22:35 -0400 (0:00:00.491)       0:03:17.499 *********
2018-08-20 06:22:35,533 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,551 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,700 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "cmd": "echo 'export ERL_EPMD_ADDRESS=127.0.0.1' > /etc/rabbitmq/rabbitmq-env.conf\n echo 'export ERL_EPMD_PORT=4370' >> /etc/rabbitmq/rabbitmq-env.conf\n for pid in $(pgrep epmd --ns 1 --nslist pid); do kill $pid; done", "delta": "0:00:00.040288", "end": "2018-08-20 06:22:35.682361", "rc": 0, "start": "2018-08-20 06:22:35.642073", "stderr": "/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory\n/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "stderr_lines": ["/bin/sh: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory", "/bin/sh: line 1: /etc/rabbitmq/rabbitmq-env.conf: No such file or directory"], "stdout": "", "stdout_lines": []}
2018-08-20 06:22:35,726 p=1013 u=mistral | TASK [create persistent directories] *******************************************
2018-08-20 06:22:35,726 p=1013 u=mistral | Monday 20 August 2018 06:22:35 -0400 (0:00:00.257)       0:03:17.756 *********
2018-08-20 06:22:35,800 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,801 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,802 p=1013 u=mistral | skipping: [compute-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,820 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/redis) => {"changed": false, "item": "/var/lib/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,822 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/redis) => {"changed": false, "item": "/var/log/containers/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,827 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/run/redis) => {"changed": false, "item": "/var/run/redis", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:35,945 p=1013 u=mistral | ok: [controller-0] => (item=/var/lib/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/lib/redis", "mode": "0750", "owner": "redis", "path": "/var/lib/redis", "secontext": "system_u:object_r:redis_var_lib_t:s0", "size": 6, "state": "directory", "uid": 992}
2018-08-20 06:22:36,101 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/containers/redis) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/redis", "mode": "0755", "owner": "root", "path": "/var/log/containers/redis", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:36,267 p=1013 u=mistral | ok: [controller-0] => (item=/var/run/redis) => {"changed": false, "gid": 988, "group": "redis", "item": "/var/run/redis", "mode": "0755", "owner": "redis", "path": "/var/run/redis", "secontext": "system_u:object_r:redis_var_run_t:s0", "size": 40, "state": "directory", "uid": 992}
2018-08-20 06:22:36,294 p=1013 u=mistral | TASK [redis logs readme] *******************************************************
2018-08-20 06:22:36,294 p=1013 u=mistral | Monday 20 August 2018 06:22:36 -0400 (0:00:00.567)       0:03:18.324 *********
2018-08-20 06:22:36,354 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:36,367 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:36,837 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42d03af8abf93e87fdb3fc69702638fc81d943fb", "dest": "/var/log/redis/readme.txt", "gid": 0, "group": "root", "md5sum": "26fc3dbfb40d3414a608e987cc577748", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:redis_log_t:s0", "size": 78, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760556.34-75041659022796/source", "state": "file", "uid": 0}
2018-08-20 06:22:36,860 p=1013 u=mistral | TASK [create /var/lib/sahara] **************************************************
2018-08-20 06:22:36,860 p=1013 u=mistral | Monday 20 August 2018 06:22:36 -0400 (0:00:00.565)       0:03:18.890 *********
2018-08-20 06:22:36,915 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:36,927 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,059 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/sahara", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:37,082 p=1013 u=mistral | TASK [create persistent sahara logs directory] *********************************
2018-08-20 06:22:37,082 p=1013 u=mistral | Monday 20 August 2018 06:22:37 -0400 (0:00:00.222)       0:03:19.112 *********
2018-08-20 06:22:37,135 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,146 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,277 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/sahara", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:37,303 p=1013 u=mistral | TASK [sahara logs readme] ******************************************************
2018-08-20 06:22:37,303 p=1013 u=mistral | Monday 20 August 2018 06:22:37 -0400 (0:00:00.221)       0:03:19.333 *********
2018-08-20 06:22:37,391 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,408 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,808 p=1013 u=mistral | fatal: [controller-0]: FAILED!
=> {"changed": false, "checksum": "b0212a1177fa4a88502d17a1cbc31198040cf047", "msg": "Destination directory /var/log/sahara does not exist"}
2018-08-20 06:22:37,808 p=1013 u=mistral | ...ignoring
2018-08-20 06:22:37,833 p=1013 u=mistral | TASK [create persistent directories] *******************************************
2018-08-20 06:22:37,833 p=1013 u=mistral | Monday 20 August 2018 06:22:37 -0400 (0:00:00.529)       0:03:19.863 *********
2018-08-20 06:22:37,896 p=1013 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,898 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,930 p=1013 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:37,931 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,067 p=1013 u=mistral | changed: [controller-0] => (item=/srv/node) => {"changed": true, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:38,221 p=1013 u=mistral | changed: [controller-0] => (item=/var/log/swift) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:38,249 p=1013 u=mistral | TASK [Create swift logging symlink] ********************************************
2018-08-20 06:22:38,249 p=1013 u=mistral | Monday 20 August 2018 06:22:38 -0400 (0:00:00.415)       0:03:20.279 *********
2018-08-20 06:22:38,314 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,325 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,446 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "dest": "/var/log/containers/swift", "gid": 0, "group": "root", "mode": "0777", "owner": "root", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 14, "src": "/var/log/swift", "state": "link", "uid": 0}
2018-08-20 06:22:38,476 p=1013 u=mistral | TASK [create persistent directories] *******************************************
2018-08-20 06:22:38,477 p=1013 u=mistral | Monday 20 August 2018 06:22:38 -0400 (0:00:00.227)       0:03:20.507 *********
2018-08-20 06:22:38,563 p=1013 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,571 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,572 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,611 p=1013 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,612 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,613 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:38,733 p=1013 u=mistral | ok: [controller-0] => (item=/srv/node) => {"changed": false, "gid": 0, "group": "root", "item": "/srv/node", "mode": "0755", "owner": "root", "path": "/srv/node", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:38,899 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/swift) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/swift", "mode": "0755", "owner": "root", "path": "/var/log/swift", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:39,063 p=1013 u=mistral | ok: [controller-0] => (item=/var/log/containers) => {"changed": false, "gid": 0, "group": "root", "item": "/var/log/containers", "mode": "0755", "owner": "root", "path": "/var/log/containers", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 244, "state": "directory", "uid": 0}
2018-08-20 06:22:39,090 p=1013 u=mistral | TASK [Set swift_use_local_disks fact] ******************************************
2018-08-20 06:22:39,090 p=1013 u=mistral | Monday 20 August 2018 06:22:39 -0400 (0:00:00.613)       0:03:21.120 *********
2018-08-20 06:22:39,148 p=1013 u=mistral | ok: [controller-0] => {"ansible_facts": {"swift_use_local_disks": true}, "changed": false}
2018-08-20 06:22:39,149 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,161 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,186 p=1013 u=mistral | TASK [Create Swift d1 directory if needed] *************************************
2018-08-20 06:22:39,186 p=1013 u=mistral | Monday 20 August 2018 06:22:39 -0400 (0:00:00.095)       0:03:21.216 *********
2018-08-20 06:22:39,244 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,263 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,391 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/srv/node/d1", "secontext": "unconfined_u:object_r:var_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:39,416 p=1013 u=mistral | TASK [swift logs readme] *******************************************************
2018-08-20 06:22:39,416 p=1013 u=mistral | Monday 20 August 2018 06:22:39 -0400 (0:00:00.230)       0:03:21.446 *********
2018-08-20 06:22:39,471 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,483 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:39,913 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "42510a6de124722d6efbc2b1bb038bfe97e5b6d3", "dest": "/var/log/swift/readme.txt", "gid": 0, "group": "root", "md5sum": "23163287d564762945ee1738f049dc10", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:var_log_t:s0", "size": 116, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760559.46-75318475393451/source", "state": "file", "uid": 0}
2018-08-20 06:22:39,936 p=1013 u=mistral | TASK [Format SwiftRawDisks] ****************************************************
2018-08-20 06:22:39,936 p=1013 u=mistral | Monday 20 August 2018 06:22:39 -0400 (0:00:00.519)       0:03:21.966 *********
2018-08-20 06:22:40,015 p=1013 u=mistral | TASK [Mount devices defined in SwiftRawDisks] **********************************
2018-08-20 06:22:40,015 p=1013 u=mistral | Monday 20 August 2018 06:22:40 -0400 (0:00:00.079)       0:03:22.045 *********
2018-08-20 06:22:40,102 p=1013 u=mistral | TASK [create persistent logs directory] ****************************************
2018-08-20 06:22:40,103 p=1013 u=mistral | Monday 20 August 2018 06:22:40 -0400 (0:00:00.087)
0:03:22.133 *********
2018-08-20 06:22:40,128 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:40,167 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:40,316 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/ceilometer", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:40,342 p=1013 u=mistral | TASK [ceilometer logs readme] **************************************************
2018-08-20 06:22:40,342 p=1013 u=mistral | Monday 20 August 2018 06:22:40 -0400 (0:00:00.239)       0:03:22.372 *********
2018-08-20 06:22:40,370 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:40,410 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:40,841 p=1013 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "ddd9b447be4ffb7bbfc2fa4cf7f104a4e7b2a6f3", "msg": "Destination directory /var/log/ceilometer does not exist"}
2018-08-20 06:22:40,841 p=1013 u=mistral | ...ignoring
2018-08-20 06:22:40,866 p=1013 u=mistral | TASK [create persistent logs directory] ****************************************
2018-08-20 06:22:40,867 p=1013 u=mistral | Monday 20 August 2018 06:22:40 -0400 (0:00:00.524)       0:03:22.897 *********
2018-08-20 06:22:40,897 p=1013 u=mistral | skipping: [controller-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:40,950 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/neutron) => {"changed": false, "item": "/var/log/containers/neutron", "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,099 p=1013 u=mistral | changed: [compute-0] => (item=/var/log/containers/neutron) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/neutron", "mode": "0755", "owner": "root", "path": "/var/log/containers/neutron", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:41,126 p=1013 u=mistral | TASK [neutron logs readme] *****************************************************
2018-08-20 06:22:41,126 p=1013 u=mistral | Monday 20 August 2018 06:22:41 -0400 (0:00:00.259)       0:03:23.156 *********
2018-08-20 06:22:41,156 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,199 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,640 p=1013 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "f5a95f434a4aad25a9a81a045dec39159a6e8864", "msg": "Destination directory /var/log/neutron does not exist"}
2018-08-20 06:22:41,640 p=1013 u=mistral | ...ignoring
2018-08-20 06:22:41,667 p=1013 u=mistral | TASK [set_fact] ****************************************************************
2018-08-20 06:22:41,667 p=1013 u=mistral | Monday 20 August 2018 06:22:41 -0400 (0:00:00.540)       0:03:23.697 *********
2018-08-20 06:22:41,696 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,739 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false --live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false}
2018-08-20 06:22:41,743 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,774 p=1013 u=mistral | TASK [include_role] ************************************************************
2018-08-20 06:22:41,774 p=1013 u=mistral | Monday 20 August 2018 06:22:41 -0400 (0:00:00.106)       0:03:23.804 *********
2018-08-20 06:22:41,803 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,847 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:41,889 p=1013 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] *************************
2018-08-20 06:22:41,889 p=1013 u=mistral | Monday 20 August 2018 06:22:41 -0400 (0:00:00.115)       0:03:23.919
*********
2018-08-20 06:22:42,153 p=1013 u=mistral | changed: [compute-0] => {"changed": true}
2018-08-20 06:22:42,174 p=1013 u=mistral | TASK [container-registry : ensure docker is installed] *************************
2018-08-20 06:22:42,174 p=1013 u=mistral | Monday 20 August 2018 06:22:42 -0400 (0:00:00.284)       0:03:24.204 *********
2018-08-20 06:22:42,817 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]}
2018-08-20 06:22:42,841 p=1013 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ********
2018-08-20 06:22:42,841 p=1013 u=mistral | Monday 20 August 2018 06:22:42 -0400 (0:00:00.667)       0:03:24.871 *********
2018-08-20 06:22:43,106 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:43,124 p=1013 u=mistral | TASK [container-registry : unset mountflags] ***********************************
2018-08-20 06:22:43,124 p=1013 u=mistral | Monday 20 August 2018 06:22:43 -0400 (0:00:00.283)       0:03:25.154 *********
2018-08-20 06:22:43,375 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0}
2018-08-20 06:22:43,394 p=1013 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] *********
2018-08-20 06:22:43,394 p=1013 u=mistral | Monday 20 August 2018 06:22:43 -0400 (0:00:00.270)       0:03:25.424 *********
2018-08-20 06:22:43,640 p=1013 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
2018-08-20 06:22:43,660 p=1013 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] ***
2018-08-20 06:22:43,660 p=1013 u=mistral | Monday 20 August 2018 06:22:43 -0400 (0:00:00.265)       0:03:25.690 *********
2018-08-20 06:22:43,895 p=1013 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line added"}
2018-08-20 06:22:43,914 p=1013 u=mistral | TASK [container-registry : Create additional socket directories] ***************
2018-08-20 06:22:43,914 p=1013 u=mistral | Monday 20 August 2018 06:22:43 -0400 (0:00:00.254)       0:03:25.944 *********
2018-08-20 06:22:44,141 p=1013 u=mistral | changed: [compute-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0}
2018-08-20 06:22:44,187 p=1013 u=mistral | TASK [container-registry : manage /etc/docker/daemon.json] *********************
2018-08-20 06:22:44,187 p=1013 u=mistral | Monday 20 August 2018 06:22:44 -0400 (0:00:00.272)       0:03:26.217 *********
2018-08-20 06:22:44,766 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760564.23-101568897433589/source", "state": "file", "uid": 0}
2018-08-20 06:22:44,787 p=1013 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] ***
2018-08-20 06:22:44,787 p=1013 u=mistral | Monday 20 August 2018 06:22:44 -0400 (0:00:00.599)       0:03:26.817 *********
2018-08-20 06:22:45,036 p=1013 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
2018-08-20 06:22:45,055 p=1013 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] ***
2018-08-20 06:22:45,055 p=1013 u=mistral | Monday 20 August 2018 06:22:45 -0400 (0:00:00.268)       0:03:27.085 *********
2018-08-20 06:22:45,293 p=1013 u=mistral | changed: [compute-0] => {"backup": "", "changed": true, "msg": "line replaced"}
2018-08-20 06:22:45,313 p=1013 u=mistral | TASK [container-registry : ensure docker group exists] *************************
2018-08-20 06:22:45,313 p=1013 u=mistral | Monday 20 August 2018 06:22:45 -0400 (0:00:00.257)       0:03:27.343 *********
2018-08-20 06:22:45,539 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false}
2018-08-20 06:22:45,559 p=1013 u=mistral | TASK [container-registry : add deployment user to docker group] ****************
2018-08-20 06:22:45,560 p=1013 u=mistral | Monday 20 August 2018 06:22:45 -0400 (0:00:00.247)       0:03:27.590 *********
2018-08-20 06:22:45,583 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"}
2018-08-20 06:22:45,584 p=1013 u=mistral | RUNNING HANDLER [container-registry : restart docker] **************************
2018-08-20 06:22:45,585 p=1013 u=mistral | Monday 20 August 2018 06:22:45 -0400 (0:00:00.024)       0:03:27.614 *********
2018-08-20 06:22:45,847 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002562", "end": "2018-08-20 06:22:45.798568", "rc": 0, "start": "2018-08-20 06:22:45.796006", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
2018-08-20 06:22:45,848 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] *****************
2018-08-20 06:22:45,848 p=1013 u=mistral | Monday 20 August 2018 06:22:45 -0400
(0:00:00.263)       0:03:27.878 *********
2018-08-20 06:22:46,152 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "name": null, "status": {}}
2018-08-20 06:22:46,152 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ******************
2018-08-20 06:22:46,152 p=1013 u=mistral | Monday 20 August 2018 06:22:46 -0400 (0:00:00.304)       0:03:28.182 *********
2018-08-20 06:22:47,675 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket network.target registries.service basic.target docker-storage-setup.service system.slice rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target registries.service rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
2018-08-20 06:22:47,677 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] ****
2018-08-20 06:22:47,677 p=1013 u=mistral | Monday 20 August 2018 06:22:47 -0400 (0:00:01.524)       0:03:29.707 *********
2018-08-20 06:22:47,741 p=1013 u=mistral | Pausing for 10 seconds
2018-08-20 06:22:47,741 p=1013 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
2018-08-20 06:22:47,741 p=1013 u=mistral | [container-registry : Docker | pause while Docker restarts]
Waiting for docker restart:
2018-08-20 06:22:57,744 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-08-20 06:22:47.741069", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-08-20 06:22:57.741238", "user_input": ""}
2018-08-20 06:22:57,745 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] ****************
2018-08-20 06:22:57,746 p=1013 u=mistral | Monday 20 August 2018 06:22:57 -0400 (0:00:10.068)       0:03:39.775 *********
2018-08-20 06:22:58,009 p=1013 u=mistral | changed: [compute-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.037586", "end": "2018-08-20 06:22:57.981325", "rc": 0, "start": "2018-08-20 06:22:57.943739", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]}
2018-08-20 06:22:58,031 p=1013 u=mistral | TASK [container-registry : enable and start docker] ****************************
2018-08-20 06:22:58,031 p=1013 u=mistral | Monday 20 August 2018 06:22:58 -0400 (0:00:00.285)       0:03:40.061 *********
2018-08-20 06:22:58,335 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-08-20 06:22:47 EDT", "ActiveEnterTimestampMonotonic": "456128322", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-journald.socket network.target registries.service basic.target docker-storage-setup.service system.slice rhel-push-plugin.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:22:46 EDT", "AssertTimestampMonotonic": "454965748", "Before": "shutdown.target paunch-container-shutdown.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:22:46 EDT", "ConditionTimestampMonotonic": "454965748", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "14443", "ExecMainStartTimestamp": "Mon 2018-08-20 06:22:46 EDT", "ExecMainStartTimestampMonotonic": "454967107", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-08-20 06:22:46 EDT] ; stop_time=[n/a] ; pid=14443 ; code=(null) ;
status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:22:46 EDT", "InactiveExitTimestampMonotonic": "454967141", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "14443", "MemoryAccounting": "no", "MemoryCurrent": "67219456", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target registries.service rhel-push-plugin.socket docker-cleanup.timer", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", 
"StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "20", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Mon 2018-08-20 06:22:47 EDT", "WatchdogTimestampMonotonic": "456128270", "WatchdogUSec": "0"}} >2018-08-20 06:22:58,360 p=1013 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-08-20 06:22:58,360 p=1013 u=mistral | Monday 20 August 2018 06:22:58 -0400 (0:00:00.328) 0:03:40.390 ********* >2018-08-20 06:22:58,392 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:58,446 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:58,602 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"atime": 1534760566.0958154, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "424de87cd6ae66547b285288742255731a46ab83", "ctime": 1534520507.4778163, "dev": 64514, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 4554196, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", 
"mode": "0644", "mtime": 1513292517.0, "nlink": 1, "path": "/lib/systemd/system/iscsid.socket", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 175, "uid": 0, "version": "1650827887", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}} >2018-08-20 06:22:58,629 p=1013 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-08-20 06:22:58,629 p=1013 u=mistral | Monday 20 August 2018 06:22:58 -0400 (0:00:00.268) 0:03:40.659 ********* >2018-08-20 06:22:58,661 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:58,729 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:58,996 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "enabled": false, "name": "iscsid.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Mon 2018-08-20 06:15:14 EDT", "ActiveEnterTimestampMonotonic": "2855283", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "-.slice sysinit.target", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:15:14 EDT", "AssertTimestampMonotonic": "2854923", "Backlog": "128", "Before": "sockets.target shutdown.target iscsid.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:15:14 EDT", "ConditionTimestampMonotonic": 
"2854923", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Open-iSCSI iscsid Socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "Documentation": "man:iscsid(8) man:iscsiadm(8)", "FragmentPath": "/usr/lib/systemd/system/iscsid.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "iscsid.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:15:14 EDT", "InactiveExitTimestampMonotonic": "2855283", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "ListenStream": "@ISCSIADM_ABSTRACT_NAMESPACE", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "iscsid.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", 
"ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "Requires": "sysinit.target", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "iscsid.service", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "enabled", "WantedBy": "sockets.target", "Wants": "-.slice"}} >2018-08-20 06:22:59,022 p=1013 u=mistral | TASK [create persistent logs directory] **************************************** >2018-08-20 06:22:59,022 p=1013 u=mistral | Monday 20 August 2018 06:22:59 -0400 (0:00:00.393) 0:03:41.052 ********* >2018-08-20 06:22:59,052 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,100 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,301 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/containers/nova", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:22:59,327 p=1013 u=mistral | TASK [nova logs readme] ******************************************************** >2018-08-20 06:22:59,327 p=1013 
u=mistral | Monday 20 August 2018 06:22:59 -0400 (0:00:00.304) 0:03:41.357 ********* >2018-08-20 06:22:59,360 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,411 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,900 p=1013 u=mistral | fatal: [compute-0]: FAILED! => {"changed": false, "checksum": "c2216cc4edf5d3ce90f10748c3243db4e1842a85", "msg": "Destination directory /var/log/nova does not exist"} >2018-08-20 06:22:59,900 p=1013 u=mistral | ...ignoring >2018-08-20 06:22:59,924 p=1013 u=mistral | TASK [Mount Nova NFS Share] **************************************************** >2018-08-20 06:22:59,925 p=1013 u=mistral | Monday 20 August 2018 06:22:59 -0400 (0:00:00.597) 0:03:41.955 ********* >2018-08-20 06:22:59,955 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,983 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:22:59,996 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,019 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:23:00,019 p=1013 u=mistral | Monday 20 August 2018 06:23:00 -0400 (0:00:00.094) 0:03:42.049 ********* >2018-08-20 06:23:00,075 p=1013 u=mistral | skipping: [controller-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,076 p=1013 u=mistral | skipping: [controller-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,077 p=1013 u=mistral | skipping: [controller-0] => 
(item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,106 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova) => {"changed": false, "item": "/var/lib/nova", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,106 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/nova/instances) => {"changed": false, "item": "/var/lib/nova/instances", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,107 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,305 p=1013 u=mistral | changed: [compute-0] => (item=/var/lib/nova) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova", "mode": "0755", "owner": "root", "path": "/var/lib/nova", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:00,468 p=1013 u=mistral | changed: [compute-0] => (item=/var/lib/nova/instances) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/nova/instances", "mode": "0755", "owner": "root", "path": "/var/lib/nova/instances", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:00,645 p=1013 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-08-20 06:23:00,710 p=1013 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-08-20 06:23:00,711 p=1013 u=mistral | Monday 20 August 2018 06:23:00 -0400 (0:00:00.691) 0:03:42.740 ********* >2018-08-20 06:23:00,770 p=1013 u=mistral | skipping: [controller-0] => {"changed": 
false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,796 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:00,964 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:00,988 p=1013 u=mistral | TASK [is Instance HA enabled] ************************************************** >2018-08-20 06:23:00,988 p=1013 u=mistral | Monday 20 August 2018 06:23:00 -0400 (0:00:00.277) 0:03:43.018 ********* >2018-08-20 06:23:01,045 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,060 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,063 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"instance_ha_enabled": false}, "changed": false} >2018-08-20 06:23:01,086 p=1013 u=mistral | TASK [prepare Instance HA script directory] ************************************ >2018-08-20 06:23:01,086 p=1013 u=mistral | Monday 20 August 2018 06:23:01 -0400 (0:00:00.098) 0:03:43.116 ********* >2018-08-20 06:23:01,117 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,143 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,156 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,183 p=1013 u=mistral | TASK [install Instance HA script that runs nova-compute] *********************** >2018-08-20 06:23:01,184 p=1013 u=mistral | Monday 20 August 2018 06:23:01 -0400 (0:00:00.097) 0:03:43.214 ********* >2018-08-20 06:23:01,212 
p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,240 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,254 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,279 p=1013 u=mistral | TASK [Get list of instance HA compute nodes] *********************************** >2018-08-20 06:23:01,279 p=1013 u=mistral | Monday 20 August 2018 06:23:01 -0400 (0:00:00.095) 0:03:43.309 ********* >2018-08-20 06:23:01,309 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,336 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,348 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,371 p=1013 u=mistral | TASK [If instance HA is enabled on the node activate the evacuation completed check] *** >2018-08-20 06:23:01,371 p=1013 u=mistral | Monday 20 August 2018 06:23:01 -0400 (0:00:00.092) 0:03:43.401 ********* >2018-08-20 06:23:01,400 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,428 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,440 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,465 p=1013 u=mistral | TASK [create libvirt persistent data directories] ****************************** >2018-08-20 06:23:01,465 p=1013 u=mistral | Monday 20 August 2018 06:23:01 -0400 (0:00:00.094) 0:03:43.495 ********* >2018-08-20 06:23:01,525 p=1013 u=mistral | skipping: [controller-0] => 
(item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,527 p=1013 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,529 p=1013 u=mistral | skipping: [controller-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,533 p=1013 u=mistral | skipping: [controller-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,534 p=1013 u=mistral | skipping: [controller-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,545 p=1013 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt) => {"changed": false, "item": "/etc/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,552 p=1013 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/secrets) => {"changed": false, "item": "/etc/libvirt/secrets", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,558 p=1013 u=mistral | skipping: [ceph-0] => (item=/etc/libvirt/qemu) => {"changed": false, "item": "/etc/libvirt/qemu", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,575 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/libvirt) => {"changed": false, "item": "/var/lib/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,577 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/libvirt) => {"changed": false, "item": "/var/log/containers/libvirt", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:01,697 p=1013 u=mistral | ok: [compute-0] => (item=/etc/libvirt) => {"changed": false, "gid": 0, "group": 
"root", "item": "/etc/libvirt", "mode": "0700", "owner": "root", "path": "/etc/libvirt", "secontext": "system_u:object_r:virt_etc_t:s0", "size": 215, "state": "directory", "uid": 0} >2018-08-20 06:23:01,859 p=1013 u=mistral | ok: [compute-0] => (item=/etc/libvirt/secrets) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/secrets", "mode": "0700", "owner": "root", "path": "/etc/libvirt/secrets", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:02,024 p=1013 u=mistral | ok: [compute-0] => (item=/etc/libvirt/qemu) => {"changed": false, "gid": 0, "group": "root", "item": "/etc/libvirt/qemu", "mode": "0700", "owner": "root", "path": "/etc/libvirt/qemu", "secontext": "system_u:object_r:virt_etc_rw_t:s0", "size": 22, "state": "directory", "uid": 0} >2018-08-20 06:23:02,170 p=1013 u=mistral | ok: [compute-0] => (item=/var/lib/libvirt) => {"changed": false, "gid": 0, "group": "root", "item": "/var/lib/libvirt", "mode": "0755", "owner": "root", "path": "/var/lib/libvirt", "secontext": "system_u:object_r:virt_var_lib_t:s0", "size": 104, "state": "directory", "uid": 0} >2018-08-20 06:23:02,316 p=1013 u=mistral | changed: [compute-0] => (item=/var/log/containers/libvirt) => {"changed": true, "gid": 0, "group": "root", "item": "/var/log/containers/libvirt", "mode": "0755", "owner": "root", "path": "/var/log/containers/libvirt", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:02,342 p=1013 u=mistral | TASK [ensure qemu group is present on the host] ******************************** >2018-08-20 06:23:02,343 p=1013 u=mistral | Monday 20 August 2018 06:23:02 -0400 (0:00:00.877) 0:03:44.372 ********* >2018-08-20 06:23:02,372 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:02,421 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": 
"Conditional result was False"} >2018-08-20 06:23:02,551 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "gid": 107, "name": "qemu", "state": "present", "system": false} >2018-08-20 06:23:02,578 p=1013 u=mistral | TASK [ensure qemu user is present on the host] ********************************* >2018-08-20 06:23:02,578 p=1013 u=mistral | Monday 20 August 2018 06:23:02 -0400 (0:00:00.235) 0:03:44.608 ********* >2018-08-20 06:23:02,612 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:02,658 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:02,975 p=1013 u=mistral | ok: [compute-0] => {"append": false, "changed": false, "comment": "qemu user", "group": 107, "home": "/", "move_home": false, "name": "qemu", "shell": "/sbin/nologin", "state": "present", "uid": 107} >2018-08-20 06:23:02,999 p=1013 u=mistral | TASK [create directory for vhost-user sockets with qemu ownership] ************* >2018-08-20 06:23:03,000 p=1013 u=mistral | Monday 20 August 2018 06:23:02 -0400 (0:00:00.421) 0:03:45.029 ********* >2018-08-20 06:23:03,029 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,070 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,212 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 107, "group": "qemu", "mode": "0755", "owner": "qemu", "path": "/var/lib/vhost_sockets", "secontext": "system_u:object_r:virt_cache_t:s0", "size": 6, "state": "directory", "uid": 107} >2018-08-20 06:23:03,239 p=1013 u=mistral | TASK [check if libvirt is installed] ******************************************* >2018-08-20 06:23:03,239 p=1013 u=mistral | Monday 20 August 2018 06:23:03 -0400 (0:00:00.239) 0:03:45.269 ********* >2018-08-20 06:23:03,269 p=1013 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,314 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,482 p=1013 u=mistral | [WARNING]: Consider using the yum, dnf or zypper module rather than running >rpm. If you need to use command because yum, dnf or zypper is insufficient you >can add warn=False to this command task or set command_warnings=False in >ansible.cfg to get rid of this message. > >2018-08-20 06:23:03,482 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["/usr/bin/rpm", "-q", "libvirt-daemon"], "delta": "0:00:00.035371", "end": "2018-08-20 06:23:03.461695", "failed_when_result": false, "rc": 0, "start": "2018-08-20 06:23:03.426324", "stderr": "", "stderr_lines": [], "stdout": "libvirt-daemon-3.9.0-14.el7_5.7.x86_64", "stdout_lines": ["libvirt-daemon-3.9.0-14.el7_5.7.x86_64"]} >2018-08-20 06:23:03,508 p=1013 u=mistral | TASK [make sure libvirt services are disabled] ********************************* >2018-08-20 06:23:03,509 p=1013 u=mistral | Monday 20 August 2018 06:23:03 -0400 (0:00:00.269) 0:03:45.539 ********* >2018-08-20 06:23:03,540 p=1013 u=mistral | skipping: [controller-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,569 p=1013 u=mistral | skipping: [controller-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,592 p=1013 u=mistral | skipping: [ceph-0] => (item=libvirtd.service) => {"changed": false, "item": "libvirtd.service", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,593 p=1013 u=mistral | skipping: [ceph-0] => (item=virtlogd.socket) => {"changed": false, "item": "virtlogd.socket", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:03,853 
p=1013 u=mistral | changed: [compute-0] => (item=libvirtd.service) => {"changed": true, "enabled": false, "item": "libvirtd.service", "name": "libvirtd.service", "state": "stopped", "status": {"ActiveEnterTimestamp": "Mon 2018-08-20 06:15:15 EDT", "ActiveEnterTimestampMonotonic": "4476381", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "virtlogd.socket local-fs.target virtlockd.service system.slice apparmor.service remote-fs.target virtlockd.socket network.target iscsid.service virtlogd.service systemd-journald.socket basic.target dbus.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:15:15 EDT", "AssertTimestampMonotonic": "4313449", "Before": "shutdown.target multi-user.target libvirt-guests.service", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:15:15 EDT", "ConditionTimestampMonotonic": "4313449", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/libvirtd.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Virtualization daemon", "DevicePolicy": "auto", "Documentation": "man:libvirtd(8) https://libvirt.org", "EnvironmentFile": "/etc/sysconfig/libvirtd (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "1171", "ExecMainStartTimestamp": "Mon 2018-08-20 06:15:15 EDT", "ExecMainStartTimestampMonotonic": "4314592", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; 
status=0/0 }", "ExecStart": "{ path=/usr/sbin/libvirtd ; argv[]=/usr/sbin/libvirtd $LIBVIRTD_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/libvirtd.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "libvirtd.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:15:15 EDT", "InactiveExitTimestampMonotonic": "4314627", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "8192", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "1171", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "libvirtd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "virtlogd.socket basic.target virtlockd.socket", "Restart": "on-failure", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", 
"SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "32768", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "WantedBy": "multi-user.target libvirt-guests.service", "Wants": "system.slice", "WatchdogTimestamp": "Mon 2018-08-20 06:15:15 EDT", "WatchdogTimestampMonotonic": "4476338", "WatchdogUSec": "0"}} >2018-08-20 06:23:04,023 p=1013 u=mistral | changed: [compute-0] => (item=virtlogd.socket) => {"changed": true, "enabled": false, "item": "virtlogd.socket", "name": "virtlogd.socket", "state": "stopped", "status": {"Accept": "no", "ActiveEnterTimestamp": "Mon 2018-08-20 06:15:14 EDT", "ActiveEnterTimestampMonotonic": "2856223", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target -.slice -.mount", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:15:14 EDT", "AssertTimestampMonotonic": "2855432", "Backlog": "128", "Before": "virtlogd.service sockets.target shutdown.target libvirtd.service", "BindIPv6Only": "default", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "Broadcast": "no", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", 
"CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:15:14 EDT", "ConditionTimestampMonotonic": "2855432", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "DeferAcceptUSec": "0", "Delegate": "no", "Description": "Virtual machine log manager socket", "DevicePolicy": "auto", "DirectoryMode": "0755", "FragmentPath": "/usr/lib/systemd/system/virtlogd.socket", "FreeBind": "no", "IOScheduling": "0", "IPTOS": "-1", "IPTTL": "-1", "Id": "virtlogd.socket", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:15:14 EDT", "InactiveExitTimestampMonotonic": "2856223", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KeepAlive": "no", "KeepAliveIntervalUSec": "0", "KeepAliveProbes": "0", "KeepAliveTimeUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "22973", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22973", "LimitSTACK": "18446744073709551615", "ListenStream": "/var/run/libvirt/virtlogd-sock", "LoadState": "loaded", "Mark": "-1", "MaxConnections": "64", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "NAccepted": "0", "NConnections": "0", "Names": "virtlogd.socket", "NeedDaemonReload": "no", "Nice": "0", "NoDelay": "no", "NoNewPrivileges": "no", "NonBlocking": "no", "OOMScoreAdjust": 
"0", "OnFailureJobMode": "replace", "PassCredentials": "no", "PassSecurity": "no", "PipeSize": "0", "Priority": "-1", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "ReceiveBuffer": "0", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemoveOnStop": "no", "RequiredBy": "virtlogd.service libvirtd.service", "Requires": "sysinit.target -.mount", "RequiresMountsFor": "/var/run/libvirt/virtlogd-sock", "Result": "success", "ReusePort": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendBuffer": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "SocketMode": "0666", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StopWhenUnneeded": "no", "SubState": "listening", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Transparent": "no", "Triggers": "virtlogd.service", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "disabled", "Wants": "-.slice"}} >2018-08-20 06:23:04,051 p=1013 u=mistral | TASK [NTP settings] ************************************************************ >2018-08-20 06:23:04,052 p=1013 u=mistral | Monday 20 August 2018 06:23:04 -0400 (0:00:00.542) 0:03:46.082 ********* >2018-08-20 06:23:04,079 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,118 p=1013 u=mistral | ok: [compute-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-08-20 06:23:04,121 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,145 p=1013 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-08-20 06:23:04,145 p=1013 u=mistral | Monday 20 August 2018 06:23:04 -0400 (0:00:00.093) 0:03:46.175 ********* >2018-08-20 06:23:04,175 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,205 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,218 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,239 p=1013 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-08-20 06:23:04,239 p=1013 u=mistral | Monday 20 August 2018 06:23:04 -0400 (0:00:00.093) 0:03:46.269 ********* >2018-08-20 06:23:04,267 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:04,306 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,764 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:07.308983", "end": "2018-08-20 06:23:11.745181", "rc": 0, "start": "2018-08-20 06:23:04.436198", "stderr": "", "stderr_lines": [], "stdout": "20 Aug 06:23:11 ntpdate[14948]: adjust time server 10.11.160.238 offset 0.003051 sec", "stdout_lines": ["20 Aug 06:23:11 ntpdate[14948]: adjust time server 10.11.160.238 offset 0.003051 sec"]} >2018-08-20 06:23:11,789 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:23:11,789 p=1013 u=mistral | Monday 20 August 2018 06:23:11 -0400 (0:00:07.549) 0:03:53.819 ********* >2018-08-20 06:23:11,847 p=1013 u=mistral | skipping: [controller-0] 
=> (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,849 p=1013 u=mistral | skipping: [controller-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,851 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,851 p=1013 u=mistral | skipping: [compute-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,864 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers/cinder) => {"changed": false, "item": "/var/log/containers/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,872 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/lib/cinder) => {"changed": false, "item": "/var/lib/cinder", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,897 p=1013 u=mistral | TASK [cinder logs readme] ****************************************************** >2018-08-20 06:23:11,897 p=1013 u=mistral | Monday 20 August 2018 06:23:11 -0400 (0:00:00.108) 0:03:53.927 ********* >2018-08-20 06:23:11,925 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,951 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,963 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:11,987 p=1013 u=mistral | TASK [ensure ceph configurations exist] **************************************** >2018-08-20 06:23:11,987 p=1013 u=mistral | Monday 20 August 2018 06:23:11 -0400 (0:00:00.089) 
0:03:54.017 ********* >2018-08-20 06:23:12,014 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,041 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,052 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,073 p=1013 u=mistral | TASK [cinder_enable_iscsi_backend fact] **************************************** >2018-08-20 06:23:12,073 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.086) 0:03:54.103 ********* >2018-08-20 06:23:12,097 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,120 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,134 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,156 p=1013 u=mistral | TASK [cinder create LVM volume group dd] *************************************** >2018-08-20 06:23:12,156 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.082) 0:03:54.186 ********* >2018-08-20 06:23:12,183 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,206 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,216 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,237 p=1013 u=mistral | TASK [cinder create LVM volume group] ****************************************** >2018-08-20 06:23:12,237 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.080) 0:03:54.267 ********* >2018-08-20 06:23:12,260 p=1013 
u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,287 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,298 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,323 p=1013 u=mistral | TASK [set_fact] **************************************************************** >2018-08-20 06:23:12,323 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.086) 0:03:54.353 ********* >2018-08-20 06:23:12,352 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,378 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,394 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,417 p=1013 u=mistral | TASK [include_role] ************************************************************ >2018-08-20 06:23:12,417 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.093) 0:03:54.447 ********* >2018-08-20 06:23:12,446 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,474 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,488 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,511 p=1013 u=mistral | TASK [stat /lib/systemd/system/iscsid.socket] ********************************** >2018-08-20 06:23:12,511 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.094) 0:03:54.541 ********* >2018-08-20 06:23:12,540 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, 
"skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,565 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,582 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,610 p=1013 u=mistral | TASK [Stop and disable iscsid.socket service] ********************************** >2018-08-20 06:23:12,611 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.099) 0:03:54.641 ********* >2018-08-20 06:23:12,688 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,716 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,728 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,751 p=1013 u=mistral | TASK [NTP settings] ************************************************************ >2018-08-20 06:23:12,752 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.140) 0:03:54.781 ********* >2018-08-20 06:23:12,780 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,807 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,820 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,844 p=1013 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-08-20 06:23:12,844 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.092) 0:03:54.874 ********* >2018-08-20 06:23:12,872 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 
06:23:12,897 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,910 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,934 p=1013 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-08-20 06:23:12,934 p=1013 u=mistral | Monday 20 August 2018 06:23:12 -0400 (0:00:00.089) 0:03:54.964 ********* >2018-08-20 06:23:12,961 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:12,989 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,002 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,026 p=1013 u=mistral | TASK [set_fact] **************************************************************** >2018-08-20 06:23:13,026 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.092) 0:03:55.056 ********* >2018-08-20 06:23:13,054 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,078 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,092 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,113 p=1013 u=mistral | TASK [include_role] ************************************************************ >2018-08-20 06:23:13,113 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.087) 0:03:55.143 ********* >2018-08-20 06:23:13,139 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,162 p=1013 u=mistral | skipping: [compute-0] => 
{"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,175 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,197 p=1013 u=mistral | TASK [NTP settings] ************************************************************ >2018-08-20 06:23:13,197 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.083) 0:03:55.227 ********* >2018-08-20 06:23:13,223 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,246 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,266 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,291 p=1013 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-08-20 06:23:13,292 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.094) 0:03:55.321 ********* >2018-08-20 06:23:13,321 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,347 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,362 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,384 p=1013 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-08-20 06:23:13,384 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.092) 0:03:55.414 ********* >2018-08-20 06:23:13,412 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,440 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was 
False"} >2018-08-20 06:23:13,451 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,474 p=1013 u=mistral | TASK [create persistent directories] ******************************************* >2018-08-20 06:23:13,474 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.089) 0:03:55.504 ********* >2018-08-20 06:23:13,504 p=1013 u=mistral | skipping: [controller-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,505 p=1013 u=mistral | skipping: [controller-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,538 p=1013 u=mistral | skipping: [controller-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,540 p=1013 u=mistral | skipping: [compute-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,541 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,542 p=1013 u=mistral | skipping: [compute-0] => (item=/var/log/containers) => {"changed": false, "item": "/var/log/containers", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,557 p=1013 u=mistral | skipping: [ceph-0] => (item=/srv/node) => {"changed": false, "item": "/srv/node", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,566 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/swift) => {"changed": false, "item": "/var/log/swift", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,572 p=1013 u=mistral | skipping: [ceph-0] => (item=/var/log/containers) => {"changed": false, "item": 
"/var/log/containers", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,595 p=1013 u=mistral | TASK [Set swift_use_local_disks fact] ****************************************** >2018-08-20 06:23:13,595 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.120) 0:03:55.625 ********* >2018-08-20 06:23:13,628 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,652 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,665 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,686 p=1013 u=mistral | TASK [Create Swift d1 directory if needed] ************************************* >2018-08-20 06:23:13,686 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.091) 0:03:55.716 ********* >2018-08-20 06:23:13,714 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,738 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,751 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,773 p=1013 u=mistral | TASK [Create swift logging symlink] ******************************************** >2018-08-20 06:23:13,773 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.086) 0:03:55.803 ********* >2018-08-20 06:23:13,800 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,825 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,839 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result 
was False"} >2018-08-20 06:23:13,863 p=1013 u=mistral | TASK [swift logs readme] ******************************************************* >2018-08-20 06:23:13,863 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.090) 0:03:55.893 ********* >2018-08-20 06:23:13,890 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,918 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,928 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:13,950 p=1013 u=mistral | TASK [Format SwiftRawDisks] **************************************************** >2018-08-20 06:23:13,950 p=1013 u=mistral | Monday 20 August 2018 06:23:13 -0400 (0:00:00.086) 0:03:55.980 ********* >2018-08-20 06:23:14,035 p=1013 u=mistral | TASK [Mount devices defined in SwiftRawDisks] ********************************** >2018-08-20 06:23:14,036 p=1013 u=mistral | Monday 20 August 2018 06:23:14 -0400 (0:00:00.085) 0:03:56.065 ********* >2018-08-20 06:23:14,116 p=1013 u=mistral | TASK [set_fact] **************************************************************** >2018-08-20 06:23:14,116 p=1013 u=mistral | Monday 20 August 2018 06:23:14 -0400 (0:00:00.080) 0:03:56.146 ********* >2018-08-20 06:23:14,146 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:14,174 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:14,217 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"container_registry_additional_sockets": ["/var/lib/openstack/docker.sock"], "container_registry_debug": true, "container_registry_deployment_user": "", "container_registry_docker_options": "--log-driver=journald --signature-verification=false --iptables=false 
--live-restore", "container_registry_insecure_registries": ["192.168.24.1:8787"], "container_registry_mirror": "", "container_registry_network_options": "--bip=172.31.0.1/24"}, "changed": false} >2018-08-20 06:23:14,245 p=1013 u=mistral | TASK [include_role] ************************************************************ >2018-08-20 06:23:14,245 p=1013 u=mistral | Monday 20 August 2018 06:23:14 -0400 (0:00:00.128) 0:03:56.275 ********* >2018-08-20 06:23:14,273 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:14,299 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:14,357 p=1013 u=mistral | TASK [container-registry : enable net.ipv4.ip_forward] ************************* >2018-08-20 06:23:14,357 p=1013 u=mistral | Monday 20 August 2018 06:23:14 -0400 (0:00:00.111) 0:03:56.387 ********* >2018-08-20 06:23:14,566 p=1013 u=mistral | changed: [ceph-0] => {"changed": true} >2018-08-20 06:23:14,587 p=1013 u=mistral | TASK [container-registry : ensure docker is installed] ************************* >2018-08-20 06:23:14,587 p=1013 u=mistral | Monday 20 August 2018 06:23:14 -0400 (0:00:00.229) 0:03:56.617 ********* >2018-08-20 06:23:15,080 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "msg": "", "rc": 0, "results": ["2:docker-1.13.1-74.git6e3bb8e.el7.x86_64 providing docker is already installed"]} >2018-08-20 06:23:15,101 p=1013 u=mistral | TASK [container-registry : manage /etc/systemd/system/docker.service.d] ******** >2018-08-20 06:23:15,101 p=1013 u=mistral | Monday 20 August 2018 06:23:15 -0400 (0:00:00.514) 0:03:57.131 ********* >2018-08-20 06:23:15,300 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/systemd/system/docker.service.d", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 6, "state": "directory", 
"uid": 0} >2018-08-20 06:23:15,327 p=1013 u=mistral | TASK [container-registry : unset mountflags] *********************************** >2018-08-20 06:23:15,327 p=1013 u=mistral | Monday 20 August 2018 06:23:15 -0400 (0:00:00.225) 0:03:57.357 ********* >2018-08-20 06:23:15,557 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0644", "msg": "section and option added", "owner": "root", "path": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "secontext": "unconfined_u:object_r:systemd_unit_file_t:s0", "size": 25, "state": "file", "uid": 0} >2018-08-20 06:23:15,576 p=1013 u=mistral | TASK [container-registry : configure OPTIONS in /etc/sysconfig/docker] ********* >2018-08-20 06:23:15,576 p=1013 u=mistral | Monday 20 August 2018 06:23:15 -0400 (0:00:00.248) 0:03:57.606 ********* >2018-08-20 06:23:15,805 p=1013 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:23:15,824 p=1013 u=mistral | TASK [container-registry : configure INSECURE_REGISTRY in /etc/sysconfig/docker] *** >2018-08-20 06:23:15,824 p=1013 u=mistral | Monday 20 August 2018 06:23:15 -0400 (0:00:00.247) 0:03:57.854 ********* >2018-08-20 06:23:16,043 p=1013 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line added"} >2018-08-20 06:23:16,061 p=1013 u=mistral | TASK [container-registry : Create additional socket directories] *************** >2018-08-20 06:23:16,061 p=1013 u=mistral | Monday 20 August 2018 06:23:16 -0400 (0:00:00.236) 0:03:58.091 ********* >2018-08-20 06:23:16,254 p=1013 u=mistral | changed: [ceph-0] => (item=/var/lib/openstack/docker.sock) => {"changed": true, "gid": 0, "group": "root", "item": "/var/lib/openstack/docker.sock", "mode": "0755", "owner": "root", "path": "/var/lib/openstack", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:16,313 p=1013 u=mistral | TASK [container-registry : manage 
/etc/docker/daemon.json] ********************* >2018-08-20 06:23:16,314 p=1013 u=mistral | Monday 20 August 2018 06:23:16 -0400 (0:00:00.252) 0:03:58.344 ********* >2018-08-20 06:23:16,849 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "d1771eedce1344ec4d3895016dc72907c117e86b", "dest": "/etc/docker/daemon.json", "gid": 0, "group": "root", "md5sum": "ae138a173e2cfb9312379cf88457c29e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:container_config_t:s0", "size": 20, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760596.36-148493936208665/source", "state": "file", "uid": 0} >2018-08-20 06:23:16,869 p=1013 u=mistral | TASK [container-registry : configure DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage] *** >2018-08-20 06:23:16,869 p=1013 u=mistral | Monday 20 August 2018 06:23:16 -0400 (0:00:00.555) 0:03:58.899 ********* >2018-08-20 06:23:17,192 p=1013 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:23:17,210 p=1013 u=mistral | TASK [container-registry : configure DOCKER_NETWORK_OPTIONS in /etc/sysconfig/docker-network] *** >2018-08-20 06:23:17,211 p=1013 u=mistral | Monday 20 August 2018 06:23:17 -0400 (0:00:00.341) 0:03:59.240 ********* >2018-08-20 06:23:17,533 p=1013 u=mistral | changed: [ceph-0] => {"backup": "", "changed": true, "msg": "line replaced"} >2018-08-20 06:23:17,553 p=1013 u=mistral | TASK [container-registry : ensure docker group exists] ************************* >2018-08-20 06:23:17,554 p=1013 u=mistral | Monday 20 August 2018 06:23:17 -0400 (0:00:00.343) 0:03:59.584 ********* >2018-08-20 06:23:17,771 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 1003, "name": "docker", "state": "present", "system": false} >2018-08-20 06:23:17,790 p=1013 u=mistral | TASK [container-registry : add deployment user to docker group] **************** >2018-08-20 06:23:17,790 p=1013 u=mistral | Monday 20 August 2018 06:23:17 -0400 
(0:00:00.236) 0:03:59.820 ********* >2018-08-20 06:23:17,814 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:17,816 p=1013 u=mistral | RUNNING HANDLER [container-registry : restart docker] ************************** >2018-08-20 06:23:17,816 p=1013 u=mistral | Monday 20 August 2018 06:23:17 -0400 (0:00:00.025) 0:03:59.846 ********* >2018-08-20 06:23:18,043 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["/bin/true"], "delta": "0:00:00.002029", "end": "2018-08-20 06:23:17.998420", "rc": 0, "start": "2018-08-20 06:23:17.996391", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} >2018-08-20 06:23:18,044 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload systemd] ***************** >2018-08-20 06:23:18,044 p=1013 u=mistral | Monday 20 August 2018 06:23:18 -0400 (0:00:00.228) 0:04:00.074 ********* >2018-08-20 06:23:18,338 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "name": null, "status": {}} >2018-08-20 06:23:18,339 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | reload docker] ****************** >2018-08-20 06:23:18,339 p=1013 u=mistral | Monday 20 August 2018 06:23:18 -0400 (0:00:00.295) 0:04:00.369 ********* >2018-08-20 06:23:19,932 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "network.target registries.service basic.target rhel-push-plugin.socket docker-storage-setup.service systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", 
"CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", 
"InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22974", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket docker-cleanup.timer registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", 
"TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}} >2018-08-20 06:23:19,933 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | pause while Docker restarts] **** >2018-08-20 06:23:19,933 p=1013 u=mistral | Monday 20 August 2018 06:23:19 -0400 (0:00:01.593) 0:04:01.963 ********* >2018-08-20 06:23:19,992 p=1013 u=mistral | Pausing for 10 seconds >2018-08-20 06:23:19,993 p=1013 u=mistral | (ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort) >2018-08-20 06:23:19,993 p=1013 u=mistral | [container-registry : Docker | pause while Docker restarts] >Waiting for docker restart: >2018-08-20 06:23:29,996 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "delta": 10, "echo": true, "rc": 0, "start": "2018-08-20 06:23:19.992614", "stderr": "", "stdout": "Paused for 10.0 seconds", "stop": "2018-08-20 06:23:29.992763", "user_input": ""} >2018-08-20 06:23:29,996 p=1013 u=mistral | RUNNING HANDLER [container-registry : Docker | wait for docker] **************** >2018-08-20 06:23:29,996 p=1013 u=mistral | Monday 20 August 2018 06:23:29 -0400 (0:00:10.063) 0:04:12.026 ********* >2018-08-20 06:23:30,240 p=1013 u=mistral | changed: [ceph-0] => {"attempts": 1, "changed": true, "cmd": ["/usr/bin/docker", "images"], "delta": "0:00:00.030342", "end": "2018-08-20 06:23:30.213988", "rc": 0, "start": "2018-08-20 06:23:30.183646", "stderr": "", "stderr_lines": [], "stdout": "REPOSITORY TAG IMAGE ID CREATED SIZE", "stdout_lines": ["REPOSITORY TAG IMAGE ID CREATED SIZE"]} >2018-08-20 06:23:30,259 p=1013 u=mistral | TASK [container-registry : enable and start 
docker] **************************** >2018-08-20 06:23:30,260 p=1013 u=mistral | Monday 20 August 2018 06:23:30 -0400 (0:00:00.263) 0:04:12.290 ********* >2018-08-20 06:23:30,556 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "enabled": true, "name": "docker", "state": "started", "status": {"ActiveEnterTimestamp": "Mon 2018-08-20 06:23:19 EDT", "ActiveEnterTimestampMonotonic": "470421340", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "network.target registries.service basic.target rhel-push-plugin.socket docker-storage-setup.service systemd-journald.socket system.slice", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "yes", "AssertTimestamp": "Mon 2018-08-20 06:23:18 EDT", "AssertTimestampMonotonic": "469206806", "Before": "paunch-container-shutdown.service shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "yes", "ConditionTimestamp": "Mon 2018-08-20 06:23:18 EDT", "ConditionTimestampMonotonic": "469206806", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/docker.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Docker Application Container Engine", "DevicePolicy": "auto", "Documentation": "http://docs.docker.com", "DropInPaths": "/etc/systemd/system/docker.service.d/99-unset-mountflags.conf", "Environment": "GOTRACEBACK=crash DOCKER_HTTP_HOST_COMPAT=1 PATH=/usr/libexec/docker:/usr/bin:/usr/sbin", "EnvironmentFile": "/etc/sysconfig/docker-network (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "13867", "ExecMainStartTimestamp": "Mon 2018-08-20 
06:23:18 EDT", "ExecMainStartTimestampMonotonic": "469209040", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/bin/dockerd-current ; argv[]=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --authorization-plugin=rhel-push-plugin --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES ; ignore_errors=no ; start_time=[Mon 2018-08-20 06:23:18 EDT] ; stop_time=[n/a] ; pid=13867 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/docker.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "docker.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Mon 2018-08-20 06:23:18 EDT", "InactiveExitTimestampMonotonic": "469209077", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "process", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "1048576", "LimitNPROC": "1048576", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "22974", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "13867", "MemoryAccounting": "no", "MemoryCurrent": "64155648", 
"MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "docker.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RequiredBy": "docker-cleanup.service", "Requires": "basic.target rhel-push-plugin.socket docker-cleanup.timer registries.service", "Restart": "on-abnormal", "RestartUSec": "100ms", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "17", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "0", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "notify", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "docker-storage-setup.service system.slice", "WatchdogTimestamp": "Mon 2018-08-20 06:23:19 EDT", "WatchdogTimestampMonotonic": "470421156", "WatchdogUSec": "0"}} >2018-08-20 06:23:30,581 p=1013 u=mistral | TASK [NTP settings] ************************************************************ >2018-08-20 06:23:30,582 p=1013 u=mistral | Monday 20 August 2018 06:23:30 -0400 (0:00:00.321) 
0:04:12.612 ********* >2018-08-20 06:23:30,610 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,635 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,671 p=1013 u=mistral | ok: [ceph-0] => {"ansible_facts": {"ntp_install_packages": false, "ntp_servers": ["clock.redhat.com"]}, "changed": false} >2018-08-20 06:23:30,693 p=1013 u=mistral | TASK [Install ntpdate] ********************************************************* >2018-08-20 06:23:30,693 p=1013 u=mistral | Monday 20 August 2018 06:23:30 -0400 (0:00:00.111) 0:04:12.723 ********* >2018-08-20 06:23:30,720 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,747 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,762 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,784 p=1013 u=mistral | TASK [Ensure system is NTP time synced] **************************************** >2018-08-20 06:23:30,784 p=1013 u=mistral | Monday 20 August 2018 06:23:30 -0400 (0:00:00.090) 0:04:12.814 ********* >2018-08-20 06:23:30,809 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:30,833 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:38,532 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "cmd": ["ntpdate", "-u", "clock.redhat.com"], "delta": "0:00:07.508124", "end": "2018-08-20 06:23:38.511458", "rc": 0, "start": "2018-08-20 06:23:31.003334", "stderr": "", "stderr_lines": [], "stdout": "20 Aug 06:23:38 ntpdate[13994]: adjust time server 10.11.160.238 offset 0.001593 sec", 
"stdout_lines": ["20 Aug 06:23:38 ntpdate[13994]: adjust time server 10.11.160.238 offset 0.001593 sec"]} >2018-08-20 06:23:38,539 p=1013 u=mistral | PLAY [External deployment step 1] ********************************************** >2018-08-20 06:23:38,554 p=1013 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-08-20 06:23:38,554 p=1013 u=mistral | Monday 20 August 2018 06:23:38 -0400 (0:00:07.770) 0:04:20.584 ********* >2018-08-20 06:23:38,590 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"blacklisted_hostnames": []}, "changed": false} >2018-08-20 06:23:38,604 p=1013 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-08-20 06:23:38,604 p=1013 u=mistral | Monday 20 August 2018 06:23:38 -0400 (0:00:00.049) 0:04:20.634 ********* >2018-08-20 06:23:38,776 p=1013 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "size": 6, "state": "directory", "uid": 42430} >2018-08-20 06:23:38,922 p=1013 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "size": 6, "state": "directory", "uid": 42430} >2018-08-20 06:23:39,183 p=1013 u=mistral | changed: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": true, "gid": 42430, "group": "mistral", "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 
42430} >2018-08-20 06:23:39,201 p=1013 u=mistral | TASK [generate inventory] ****************************************************** >2018-08-20 06:23:39,201 p=1013 u=mistral | Monday 20 August 2018 06:23:39 -0400 (0:00:00.597) 0:04:21.231 ********* >2018-08-20 06:23:39,796 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "8a201930889e3f707edf16c652d954cf655a7a37", "dest": "/var/lib/mistral/overcloud/ceph-ansible/inventory.yml", "gid": 42430, "group": "mistral", "md5sum": "d7e08b2048c232bc1911a7520da033ba", "mode": "0644", "owner": "mistral", "size": 527, "src": "/tmp/ansible-/ansible-tmp-1534760619.52-121395163222105/source", "state": "file", "uid": 42430} >2018-08-20 06:23:39,809 p=1013 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-08-20 06:23:39,809 p=1013 u=mistral | Monday 20 August 2018 06:23:39 -0400 (0:00:00.608) 0:04:21.839 ********* >2018-08-20 06:23:39,850 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_all": {"ceph_conf_overrides": {"global": {"osd_pool_default_pg_num": 32, "osd_pool_default_pgp_num": 32, "osd_pool_default_size": 1, "rgw_keystone_accepted_roles": "Member, admin", "rgw_keystone_admin_domain": "default", "rgw_keystone_admin_password": "RHlB9GASqvJWMKVkKEzraCswi", "rgw_keystone_admin_project": "service", "rgw_keystone_admin_user": "swift", "rgw_keystone_api_version": 3, "rgw_keystone_implicit_tenants": "true", "rgw_keystone_revocation_interval": "0", "rgw_keystone_url": "http://172.17.1.24:5000", "rgw_s3_auth_use_keystone": "true"}}, "ceph_docker_image": "rhceph", "ceph_docker_image_tag": "3-11", "ceph_docker_registry": "192.168.24.1:8787", "ceph_origin": "distro", "ceph_stable": true, "cluster": "ceph", "cluster_network": "172.17.4.0/24", "containerized_deployment": true, "docker": true, "fsid": "00d03b50-a460-11e8-8cf1-525400721501", "generate_fsid": false, "ip_version": "ipv4", "keys": [{"caps": {"mgr": "allow *", "mon": 
"profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==", "mode": "0600", "name": "client.radosgw"}], "monitor_address_block": "172.17.3.0/24", "ntp_service_enabled": false, "openstack_config": true, "openstack_keys": [{"caps": {"mgr": "allow *", "mon": "profile rbd", "osd": "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"}, "key": "AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==", "mode": "0600", "name": "client.openstack"}, {"caps": {"mds": "allow *", "mgr": "allow *", "mon": "allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'", "osd": "allow rw"}, "key": "AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==", "mode": "0600", "name": "client.manila"}, {"caps": {"mgr": "allow *", "mon": "allow rw", "osd": "allow rwx"}, "key": "AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==", "mode": "0600", "name": "client.radosgw"}], "openstack_pools": [{"application": "rbd", "name": "images", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "openstack_gnocchi", "name": "metrics", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "backups", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "vms", "pg_num": 32, "rule_name": "replicated_rule"}, {"application": "rbd", "name": "volumes", "pg_num": 32, 
"rule_name": "replicated_rule"}], "pools": [], "public_network": "172.17.3.0/24", "user_config": true}}, "changed": false} >2018-08-20 06:23:39,869 p=1013 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-08-20 06:23:39,869 p=1013 u=mistral | Monday 20 August 2018 06:23:39 -0400 (0:00:00.059) 0:04:21.899 ********* >2018-08-20 06:23:40,170 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "e5d1da1ca9d2f45299e6f32acd93910f2e52f415", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/all.yml", "gid": 42430, "group": "mistral", "md5sum": "f847c2a56cb82b4fa296c5b36ebc6a3a", "mode": "0644", "owner": "mistral", "size": 3078, "src": "/tmp/ansible-/ansible-tmp-1534760619.91-66412192523872/source", "state": "file", "uid": 42430} >2018-08-20 06:23:40,184 p=1013 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-08-20 06:23:40,184 p=1013 u=mistral | Monday 20 August 2018 06:23:40 -0400 (0:00:00.314) 0:04:22.214 ********* >2018-08-20 06:23:40,214 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_extra_vars": {"fetch_directory": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "ireallymeanit": "yes"}}, "changed": false} >2018-08-20 06:23:40,227 p=1013 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-08-20 06:23:40,227 p=1013 u=mistral | Monday 20 August 2018 06:23:40 -0400 (0:00:00.043) 0:04:22.257 ********* >2018-08-20 06:23:40,525 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "736efc435c358cb150f966050ebc3ab5061819cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml", "gid": 42430, "group": "mistral", "md5sum": "2bc808d342a6452fceb69c11f7bc8c1e", "mode": "0644", "owner": "mistral", "size": 88, "src": "/tmp/ansible-/ansible-tmp-1534760620.26-203614705760478/source", "state": "file", "uid": 42430} >2018-08-20 06:23:40,538 p=1013 
u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-08-20 06:23:40,538 p=1013 u=mistral | Monday 20 August 2018 06:23:40 -0400 (0:00:00.310) 0:04:22.568 ********* >2018-08-20 06:23:40,840 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_data.json", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1534760620.57-195486395905458/source", "state": "file", "uid": 42430} >2018-08-20 06:23:40,852 p=1013 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-08-20 06:23:40,852 p=1013 u=mistral | Monday 20 August 2018 06:23:40 -0400 (0:00:00.313) 0:04:22.882 ********* >2018-08-20 06:23:41,140 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "6295759c7c940d5f447c8f2aa21ca4b89c07424a", "dest": "/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "gid": 42430, "group": "mistral", "md5sum": "3e3401cf992ddfe2f64ba89ba32d2941", "mode": "0644", "owner": "mistral", "size": 527, "src": "/tmp/ansible-/ansible-tmp-1534760620.88-135311047513987/source", "state": "file", "uid": 42430} >2018-08-20 06:23:41,153 p=1013 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-08-20 06:23:41,153 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.300) 0:04:23.183 ********* >2018-08-20 06:23:41,174 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:41,187 p=1013 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-08-20 06:23:41,187 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.034) 0:04:23.217 ********* >2018-08-20 06:23:41,205 
p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:41,219 p=1013 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-08-20 06:23:41,220 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.032) 0:04:23.250 ********* >2018-08-20 06:23:41,237 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:23:41,252 p=1013 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-08-20 06:23:41,252 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.032) 0:04:23.282 ********* >2018-08-20 06:23:41,280 p=1013 u=mistral | skipping: [undercloud] => (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": false, "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "skip_reason": "Conditional result was False"} >2018-08-20 06:23:41,293 p=1013 u=mistral | TASK [set ceph-ansible group vars mgrs] **************************************** >2018-08-20 06:23:41,293 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.040) 0:04:23.323 ********* >2018-08-20 06:23:41,329 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mgrs": {"ceph_mgr_docker_extra_env": "-e MGR_DASHBOARD=0"}}, "changed": false} >2018-08-20 06:23:41,343 p=1013 u=mistral | TASK [generate ceph-ansible group vars mgrs] *********************************** >2018-08-20 06:23:41,343 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.049) 0:04:23.373 ********* >2018-08-20 06:23:41,646 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "06d130f3471f2ac09bb0161450878cf64bafd8af", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mgrs.yml", "gid": 42430, "group": "mistral", "md5sum": "0d3c03a4186ad82120a728e0470a49d9", "mode": "0644", "owner": "mistral", "size": 46, 
"src": "/tmp/ansible-/ansible-tmp-1534760621.38-70123081600894/source", "state": "file", "uid": 42430} >2018-08-20 06:23:41,659 p=1013 u=mistral | TASK [set ceph-ansible group vars mons] **************************************** >2018-08-20 06:23:41,659 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.316) 0:04:23.689 ********* >2018-08-20 06:23:41,694 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_mons": {"admin_secret": "AQB3kXpbAAAAABAA3EzWcmU4bNI56dStFEmlaQ==", "monitor_secret": "AQB3kXpbAAAAABAAC+hojEKwUe4s3oIfifTbtw=="}}, "changed": false} >2018-08-20 06:23:41,708 p=1013 u=mistral | TASK [generate ceph-ansible group vars mons] *********************************** >2018-08-20 06:23:41,709 p=1013 u=mistral | Monday 20 August 2018 06:23:41 -0400 (0:00:00.049) 0:04:23.739 ********* >2018-08-20 06:23:41,998 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "6598ed9b1ec41345ec3e2668935d05a01d1438c4", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/mons.yml", "gid": 42430, "group": "mistral", "md5sum": "97da288c5225eb030acfe5f54f557572", "mode": "0644", "owner": "mistral", "size": 112, "src": "/tmp/ansible-/ansible-tmp-1534760621.74-244807462609774/source", "state": "file", "uid": 42430} >2018-08-20 06:23:42,011 p=1013 u=mistral | TASK [set ceph-ansible group vars clients] ************************************* >2018-08-20 06:23:42,011 p=1013 u=mistral | Monday 20 August 2018 06:23:42 -0400 (0:00:00.302) 0:04:24.041 ********* >2018-08-20 06:23:42,042 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_clients": {}}, "changed": false} >2018-08-20 06:23:42,056 p=1013 u=mistral | TASK [generate ceph-ansible group vars clients] ******************************** >2018-08-20 06:23:42,056 p=1013 u=mistral | Monday 20 August 2018 06:23:42 -0400 (0:00:00.044) 0:04:24.086 ********* >2018-08-20 06:23:42,345 p=1013 u=mistral | changed: [undercloud] => {"changed": 
true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/clients.yml", "gid": 42430, "group": "mistral", "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0644", "owner": "mistral", "size": 2, "src": "/tmp/ansible-/ansible-tmp-1534760622.09-150487044203560/source", "state": "file", "uid": 42430} >2018-08-20 06:23:42,359 p=1013 u=mistral | TASK [set ceph-ansible group vars osds] **************************************** >2018-08-20 06:23:42,359 p=1013 u=mistral | Monday 20 August 2018 06:23:42 -0400 (0:00:00.303) 0:04:24.389 ********* >2018-08-20 06:23:42,392 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_group_vars_osds": {"devices": ["/dev/vdb", "/dev/vdc", "/dev/vdd", "/dev/vde", "/dev/vdf"], "journal_size": 512, "osd_objectstore": "filestore", "osd_scenario": "collocated"}}, "changed": false} >2018-08-20 06:23:42,406 p=1013 u=mistral | TASK [generate ceph-ansible group vars osds] *********************************** >2018-08-20 06:23:42,406 p=1013 u=mistral | Monday 20 August 2018 06:23:42 -0400 (0:00:00.046) 0:04:24.436 ********* >2018-08-20 06:23:42,713 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "checksum": "a209fd8d503be2b45dc87935a930c08a563088cb", "dest": "/var/lib/mistral/overcloud/ceph-ansible/group_vars/osds.yml", "gid": 42430, "group": "mistral", "md5sum": "114fe63af169ecb1b28b951266282ba7", "mode": "0644", "owner": "mistral", "size": 134, "src": "/tmp/ansible-/ansible-tmp-1534760622.44-86906007317697/source", "state": "file", "uid": 42430} >2018-08-20 06:23:42,719 p=1013 u=mistral | PLAY [Overcloud deploy step tasks for 1] *************************************** >2018-08-20 06:23:42,727 p=1013 u=mistral | PLAY [Overcloud common deploy step tasks 1] ************************************ >2018-08-20 06:23:42,755 p=1013 u=mistral | TASK [Create /var/lib/tripleo-config directory] ******************************** >2018-08-20 06:23:42,755 p=1013 
u=mistral | Monday 20 August 2018 06:23:42 -0400 (0:00:00.349) 0:04:24.785 ********* >2018-08-20 06:23:43,068 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:43,126 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:43,153 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/tripleo-config", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:43,184 p=1013 u=mistral | TASK [Write the puppet step_config manifest] *********************************** >2018-08-20 06:23:43,184 p=1013 u=mistral | Monday 20 August 2018 06:23:43 -0400 (0:00:00.428) 0:04:25.214 ********* >2018-08-20 06:23:43,891 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "44355f328588ff032fb9d91a3fdf2a8f427f6ac1", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "d14bfa59823532755440579b4b515901", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1589, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760623.36-82841296308051/source", "state": "file", "uid": 0} >2018-08-20 06:23:44,027 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "466a8f2a86c39f07687a38e5228ba59c61ec5d19", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "a290d9fc287fa24e55411e78c56eb224", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 1577, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760623.44-156177578968866/source", "state": "file", "uid": 0} >2018-08-20 06:23:44,030 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "8cc2a8154fe8261f1ad4dbbf7151db6f5d016a04", "dest": "/var/lib/tripleo-config/puppet_step_config.pp", "gid": 0, "group": "root", "md5sum": "ea4a5c9cd9eca53a460514b61dc3d011", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1631, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760623.45-184281941498659/source", "state": "file", "uid": 0} >2018-08-20 06:23:44,056 p=1013 u=mistral | TASK [Create /var/lib/docker-puppet] ******************************************* >2018-08-20 06:23:44,056 p=1013 u=mistral | Monday 20 August 2018 06:23:44 -0400 (0:00:00.872) 0:04:26.086 ********* >2018-08-20 06:23:44,346 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-08-20 06:23:44,410 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-08-20 06:23:44,410 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-puppet", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 30, "state": "directory", "uid": 0} >2018-08-20 06:23:44,431 p=1013 u=mistral | TASK [Write docker-puppet.json file] ******************************************* >2018-08-20 06:23:44,431 p=1013 u=mistral | Monday 20 August 2018 06:23:44 -0400 (0:00:00.374) 0:04:26.461 ********* >2018-08-20 
06:23:45,039 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "46978cacb9f3a737b25d3f507ab0662845285378", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c6ab0c4ef4187991f499e933ae372d69", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 234, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760624.54-9603204679967/source", "state": "file", "uid": 0} >2018-08-20 06:23:45,067 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "89a662a3c815dc77dd52b4d5eb5deb9ddb4b2256", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "c184e0e37063e09353f5d7120ba6fd07", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2288, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760624.53-150122679174332/source", "state": "file", "uid": 0} >2018-08-20 06:23:45,084 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "3e1b5638e39e468dcfb39798323751d15822a391", "dest": "/var/lib/docker-puppet/docker-puppet.json", "gid": 0, "group": "root", "md5sum": "1e7af3fa6563dc4dbae79fe8096b379d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 13304, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760624.52-30615045596385/source", "state": "file", "uid": 0} >2018-08-20 06:23:45,109 p=1013 u=mistral | TASK [Create /var/lib/docker-config-scripts] *********************************** >2018-08-20 06:23:45,109 p=1013 u=mistral | Monday 20 August 2018 06:23:45 -0400 (0:00:00.677) 0:04:27.139 ********* >2018-08-20 06:23:45,330 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:45,355 p=1013 u=mistral | 
changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:45,363 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/docker-config-scripts", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:45,388 p=1013 u=mistral | TASK [Clean old /var/lib/docker-container-startup-configs.json file] *********** >2018-08-20 06:23:45,388 p=1013 u=mistral | Monday 20 August 2018 06:23:45 -0400 (0:00:00.278) 0:04:27.418 ********* >2018-08-20 06:23:45,594 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-08-20 06:23:45,630 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-08-20 06:23:45,643 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "path": "/var/lib/docker-container-startup-configs.json", "state": "absent"} >2018-08-20 06:23:45,666 p=1013 u=mistral | TASK [Write docker config scripts] ********************************************* >2018-08-20 06:23:45,666 p=1013 u=mistral | Monday 20 August 2018 06:23:45 -0400 (0:00:00.278) 0:04:27.696 ********* >2018-08-20 06:23:46,275 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport 
OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "(cellv2) Running cell_v2 host discovery"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | tr "," " "); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +"%s") + ${timeout} ))\necho "(cellv2) Waiting ${timeout} seconds for hosts to register"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo "(cellv2) compute node $host has registered"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in "${!discoverable_hosts[@]}"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo "(cellv2) compute node $host has not registered"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +"%s") ))\n if (( $finished == 1 )); then\n echo "(cellv2) All nodes registered"\n break\n elif (( $remaining <= 0 )); then\n echo "(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless"\n echo "(cellv2) Expected host list:" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e \'/^nil$/d\' | sort -u | tr \',\' \' \')\n echo "(cellv2) Detected host list:" $(openstack -q compute service list -c \'Host\' -c \'Zone\' -f value | awk \'$2 != "internal" { print $1 }\' | sort -u | tr \'\\n\', \' \')\n break\n else\n echo "(cellv2) Waiting ${remaining} seconds for hosts to register"\n sleep $loop_wait\n fi\ndone\necho "(cellv2) Running host discovery..."\nsu nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose"\n', 'mode': u'0700'}, 'key': u'nova_api_discover_hosts.sh'}) => {"changed": true, 
"checksum": "4e350e3d48cba294f2ccab34eb03c1dee23e7f82", "dest": "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_discover_hosts.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_domain_name)\nexport OS_USER_DOMAIN_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken user_domain_name)\nexport OS_PROJECT_NAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken project_name)\nexport OS_USERNAME=$(crudini --get /etc/nova/nova.conf keystone_authtoken username)\nexport OS_PASSWORD=$(crudini --get /etc/nova/nova.conf keystone_authtoken password)\nexport OS_AUTH_URL=$(crudini --get /etc/nova/nova.conf keystone_authtoken auth_url)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"(cellv2) Running cell_v2 host discovery\"\ntimeout=600\nloop_wait=30\ndeclare -A discoverable_hosts\nfor host in $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | tr \",\" \" \"); do discoverable_hosts[$host]=1; done\ntimeout_at=$(( $(date +\"%s\") + ${timeout} ))\necho \"(cellv2) Waiting ${timeout} seconds for hosts to register\"\nfinished=0\nwhile : ; do\n for host in $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }'); do\n if (( discoverable_hosts[$host] == 1 )); then\n echo \"(cellv2) compute node $host has registered\"\n unset discoverable_hosts[$host]\n fi\n done\n finished=1\n for host in \"${!discoverable_hosts[@]}\"; do\n if (( ${discoverable_hosts[$host]} == 1 )); then\n echo \"(cellv2) compute node $host has not registered\"\n finished=0\n fi\n done\n remaining=$(( $timeout_at - $(date +\"%s\") ))\n if (( $finished == 1 )); then\n echo \"(cellv2) All nodes registered\"\n break\n elif (( $remaining <= 0 )); then\n echo \"(cellv2) WARNING: timeout waiting for nodes to register, running host discovery regardless\"\n echo 
\"(cellv2) Expected host list:\" $(hiera -c /etc/puppet/hiera.yaml cellv2_discovery_hosts | sed -e '/^nil$/d' | sort -u | tr ',' ' ')\n echo \"(cellv2) Detected host list:\" $(openstack -q compute service list -c 'Host' -c 'Zone' -f value | awk '$2 != \"internal\" { print $1 }' | sort -u | tr '\\n', ' ')\n break\n else\n echo \"(cellv2) Waiting ${remaining} seconds for hosts to register\"\n sleep $loop_wait\n fi\ndone\necho \"(cellv2) Running host discovery...\"\nsu nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 discover_hosts --by-service --verbose\"\n", "mode": "0700"}}, "md5sum": "ed5dca102b28b4f992943612dee2dced", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760625.75-235706069175998/source", "state": "file", "uid": 0} >2018-08-20 06:23:46,299 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file 
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": "0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760625.79-245525556308520/source", "state": "file", "uid": 0} >2018-08-20 06:23:46,805 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'content': u'#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the "License"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger(\'nova_statedir\')\n\n\nclass PathManager(object):\n """Helper class to manipulate ownership of a given path"""\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return "uid: {} gid: {} path: {}{}".format(\n self.uid,\n self.gid,\n self.path,\n \'/\' if self.is_dir else \'\'\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info(\'Changing ownership of %s from %d:%d to %d:%d\',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info(\'Ownership of %s already %d:%d\',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n """Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. 
Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n """\n def __init__(self, statedir, upgrade_marker=\'upgrade_marker\',\n nova_user=\'nova\'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info("Checking %s", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n 
pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it\'s an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info(\'Applying nova statedir ownership\')\n LOG.info(\'Target ownership for %s: %d:%d\',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info("Checking %s", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info(\'Removing upgrade_marker %s\',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info(\'Nova statedir ownership complete\')\n\nif __name__ == \'__main__\':\n NovaStatedirOwnershipManager(\'/var/lib/nova\').run()\n', 'mode': u'0700'}, 'key': u'nova_statedir_ownership.py'}) => {"changed": true, "checksum": "052884875dafcd3e79ee18bebaed25f6994a1c37", "dest": "/var/lib/docker-config-scripts/nova_statedir_ownership.py", "gid": 0, "group": "root", "item": {"key": "nova_statedir_ownership.py", "value": {"content": "#!/usr/bin/env python\n#\n# Copyright 2018 Red Hat Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the\n# License for the specific language governing permissions and limitations\n# under the License.\nfrom __future__ import print_function\nimport logging\nimport os\nimport pwd\nimport stat\nimport sys\n\nlogging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nLOG = logging.getLogger('nova_statedir')\n\n\nclass PathManager(object):\n \"\"\"Helper class to manipulate ownership of a given path\"\"\"\n def __init__(self, path):\n self.path = path\n self._update()\n\n def _update(self):\n statinfo = os.stat(self.path)\n self.is_dir = stat.S_ISDIR(statinfo.st_mode)\n self.uid = statinfo.st_uid\n self.gid = statinfo.st_gid\n\n def __str__(self):\n return \"uid: {} gid: {} path: {}{}\".format(\n self.uid,\n self.gid,\n self.path,\n '/' if self.is_dir else ''\n )\n\n def has_owner(self, uid, gid):\n return self.uid == uid and self.gid == gid\n\n def has_either(self, uid, gid):\n return self.uid == uid or self.gid == gid\n\n def chown(self, uid, gid):\n target_uid = -1\n target_gid = -1\n if self.uid != uid:\n target_uid = uid\n if self.gid != gid:\n target_gid = gid\n if (target_uid, target_gid) != (-1, -1):\n LOG.info('Changing ownership of %s from %d:%d to %d:%d',\n self.path,\n self.uid,\n self.gid,\n self.uid if target_uid == -1 else target_uid,\n self.gid if target_gid == -1 else target_gid)\n os.chown(self.path, target_uid, target_gid)\n self._update()\n else:\n LOG.info('Ownership of %s already %d:%d',\n self.path,\n uid,\n gid)\n\n\nclass NovaStatedirOwnershipManager(object):\n \"\"\"Class to manipulate the ownership of the nova statedir (/var/lib/nova).\n\n The nova uid/gid differ on the host and container images. An upgrade\n that switches from host systemd services to docker requires a change in\n ownership. 
Previously this was a naive recursive chown, however this\n causes issues if nova instance are shared via an NFS mount: any open\n filehandles in qemu/libvirt fail with an I/O error (LP1778465).\n\n Instead the upgrade/FFU ansible tasks now lay down a marker file when\n stopping and disabling the host systemd services. We use this file to\n determine the host nova uid/gid. We then walk the tree and update any\n files that have the host uid/gid to the docker nova uid/gid. As files\n owned by root/qemu etc... are ignored this avoids the issues with open\n filehandles. The marker is removed once the tree has been walked.\n\n For subsequent runs, or for a new deployment, we simply ensure that the\n docker nova user/group owns all directories. This is required as the\n directories are created with root ownership in host_prep_tasks (the\n docker nova uid/gid is not known in this context).\n \"\"\"\n def __init__(self, statedir, upgrade_marker='upgrade_marker',\n nova_user='nova'):\n self.statedir = statedir\n self.nova_user = nova_user\n\n self.upgrade_marker_path = os.path.join(statedir, upgrade_marker)\n self.upgrade = os.path.exists(self.upgrade_marker_path)\n\n self.target_uid, self.target_gid = self._get_nova_ids()\n self.previous_uid, self.previous_gid = self._get_previous_nova_ids()\n self.id_change = (self.target_uid, self.target_gid) != \\\n (self.previous_uid, self.previous_gid)\n\n def _get_nova_ids(self):\n nova_uid, nova_gid = pwd.getpwnam(self.nova_user)[2:4]\n return nova_uid, nova_gid\n\n def _get_previous_nova_ids(self):\n if self.upgrade:\n statinfo = os.stat(self.upgrade_marker_path)\n return statinfo.st_uid, statinfo.st_gid\n else:\n return self._get_nova_ids()\n\n def _walk(self, top):\n for f in os.listdir(top):\n pathname = os.path.join(top, f)\n\n if pathname == self.upgrade_marker_path:\n continue\n\n pathinfo = PathManager(pathname)\n LOG.info(\"Checking %s\", pathinfo)\n if pathinfo.is_dir:\n # Always chown the directories\n 
pathinfo.chown(self.target_uid, self.target_gid)\n self._walk(pathname)\n elif self.id_change:\n # Only chown files if it's an upgrade and the file is owned by\n # the host nova uid/gid\n pathinfo.chown(\n self.target_uid if pathinfo.uid == self.previous_uid\n else pathinfo.uid,\n self.target_gid if pathinfo.gid == self.previous_gid\n else pathinfo.gid\n )\n\n def run(self):\n LOG.info('Applying nova statedir ownership')\n LOG.info('Target ownership for %s: %d:%d',\n self.statedir,\n self.target_uid,\n self.target_gid)\n\n pathinfo = PathManager(self.statedir)\n LOG.info(\"Checking %s\", pathinfo)\n pathinfo.chown(self.target_uid, self.target_gid)\n\n self._walk(self.statedir)\n\n if self.upgrade:\n LOG.info('Removing upgrade_marker %s',\n self.upgrade_marker_path)\n os.unlink(self.upgrade_marker_path)\n\n LOG.info('Nova statedir ownership complete')\n\nif __name__ == '__main__':\n NovaStatedirOwnershipManager('/var/lib/nova').run()\n", "mode": "0700"}}, "md5sum": "c8d51232f071c7b1fef053299a1b66c0", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6075, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760626.33-179683631795970/source", "state": "file", "uid": 0} >2018-08-20 06:23:46,809 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho "Check if secret already 
exists"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo "Failed to check secrets, check if Barbican in enabled and responding properly"\n exit $rc;\nfi\nif [ -z "$secret_href" ]; then\n echo "Create new secret"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type="application/octet-stream" --algorithm aes --bit-length 256 --mode ctr key -f value -c "Order href")\nfi\n', 'mode': u'0700'}, 'key': u'create_swift_secret.sh'}) => {"changed": true, "checksum": "e77b96beec241bb84928d298a778521376225c0d", "dest": "/var/lib/docker-config-scripts/create_swift_secret.sh", "gid": 0, "group": "root", "item": {"key": "create_swift_secret.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\n\necho \"Check if secret already exists\"\nsecret_href=$(openstack secret list --name swift_root_secret_uuid)\nrc=$?\nif [[ $rc != 0 ]]; then\n echo \"Failed to check secrets, check if Barbican in enabled and responding properly\"\n exit $rc;\nfi\nif [ -z \"$secret_href\" ]; then\n echo \"Create new secret\"\n order_href=$(openstack secret order create --name swift_root_secret_uuid --payload-content-type=\"application/octet-stream\" --algorithm aes --bit-length 256 --mode ctr key -f value -c \"Order href\")\nfi\n", "mode": "0700"}}, "md5sum": "9277d70c2fd62961998c5fce0a8aeee2", "mode": "0700", 
"owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1125, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760626.31-123124623263981/source", "state": "file", "uid": 0} >2018-08-20 06:23:47,286 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n', 'mode': u'0755'}, 'key': u'neutron_ovs_agent_launcher.sh'}) => {"changed": true, "checksum": "03f62b0a94bee17ece72ba1a3fc7577e68d9e6a4", "dest": "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh", "gid": 0, "group": "root", "item": {"key": "neutron_ovs_agent_launcher.sh", "value": {"content": "#!/bin/bash\nset -xe\n/usr/bin/python -m neutron.cmd.destroy_patch_ports --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent\n/usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-dir /etc/neutron/conf.d/common --log-file=/var/log/neutron/openvswitch-agent.log\n", "mode": "0755"}}, "md5sum": "1672c3fb89d576d045d5f3d5b23684c9", "mode": 
"0755", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760626.83-249305982112877/source", "state": "file", "uid": 0} >2018-08-20 06:23:47,819 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho "retrieve key_id"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ "$secret_href" ]; then\n echo "set key_id in keymaster.conf"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c "Secret href")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo "no key, wait for $loop_wait and check again"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho "Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly"\nexit 1\n', 'mode': u'0700'}, 'key': u'set_swift_keymaster_key_id.sh'}) => {"changed": true, "checksum": "9c2474fa6e4a8869674b689206eb1a1658a28fc6", "dest": "/var/lib/docker-config-scripts/set_swift_keymaster_key_id.sh", "gid": 0, "group": "root", "item": {"key": "set_swift_keymaster_key_id.sh", "value": {"content": "#!/bin/bash\nexport OS_PROJECT_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster 
project_domain_id)\nexport OS_USER_DOMAIN_ID=$(crudini --get /etc/swift/keymaster.conf kms_keymaster user_domain_id)\nexport OS_PROJECT_NAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster project_name)\nexport OS_USERNAME=$(crudini --get /etc/swift/keymaster.conf kms_keymaster username)\nexport OS_PASSWORD=$(crudini --get /etc/swift/keymaster.conf kms_keymaster password)\nexport OS_AUTH_URL=$(crudini --get /etc/swift/keymaster.conf kms_keymaster auth_endpoint)\nexport OS_AUTH_TYPE=password\nexport OS_IDENTITY_API_VERSION=3\necho \"retrieve key_id\"\nloop_wait=2\nfor i in {0..5}; do\n #TODO update uuid from mistral here too\n secret_href=$(openstack secret list --name swift_root_secret_uuid)\n if [ \"$secret_href\" ]; then\n echo \"set key_id in keymaster.conf\"\n secret_href=$(openstack secret list --name swift_root_secret_uuid -f value -c \"Secret href\")\n crudini --set /etc/swift/keymaster.conf kms_keymaster key_id ${secret_href##*/}\n exit 0\n else\n echo \"no key, wait for $loop_wait and check again\"\n sleep $loop_wait\n ((loop_wait++))\n fi\ndone\necho \"Failed to set secret in keymaster.conf, check if Barbican is enabled and responding properly\"\nexit 1\n", "mode": "0700"}}, "md5sum": "054225f8957e4457ef2103ce24d44b04", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1275, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760627.31-11579103086031/source", "state": "file", "uid": 0} >2018-08-20 06:23:48,338 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-\'\'}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho "{\\"step\\": ${STEP}}" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n 
--color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e "${CONFIG}"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n', 'mode': u'0700'}, 'key': u'docker_puppet_apply.sh'}) => {"changed": true, "checksum": "93afaa6df42c9ead7768b295fa901f83ae1b39ef", "dest": "/var/lib/docker-config-scripts/docker_puppet_apply.sh", "gid": 0, "group": "root", "item": {"key": "docker_puppet_apply.sh", "value": {"content": "#!/bin/bash\nset -eux\nSTEP=$1\nTAGS=$2\nCONFIG=$3\nEXTRA_ARGS=${4:-''}\nif [ -d /tmp/puppet-etc ]; then\n # ignore copy failures as these may be the same file depending on docker mounts\n cp -a /tmp/puppet-etc/* /etc/puppet || true\nfi\necho \"{\\\"step\\\": ${STEP}}\" > /etc/puppet/hieradata/docker.json\nexport FACTER_uuid=docker\nset +e\npuppet apply $EXTRA_ARGS \\\n --verbose \\\n --detailed-exitcodes \\\n --summarize \\\n --color=false \\\n --modulepath /etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules \\\n --tags $TAGS \\\n -e \"${CONFIG}\"\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n exit 0\nfi\nexit $rc\n", "mode": "0700"}}, "md5sum": "709b2caef95cc7486f9b851414e71133", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 653, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760627.85-174524043825203/source", "state": "file", "uid": 0} >2018-08-20 06:23:48,869 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'content': u'#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e \'1,3d\' -e \'$d\' | awk -F \' *| *\' \'$2 == "default" {print $4}\')\nif [ "$DEFID" ]; then\n echo "(cellv2) Updating default cell_v2 cell $DEFID"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default"\nelse\n echo "(cellv2) Creating default cell_v2 cell"\n su nova -s /bin/bash -c "/usr/bin/nova-manage cell_v2 
create_cell --name=default"\nfi\n', 'mode': u'0700'}, 'key': u'nova_api_ensure_default_cell.sh'}) => {"changed": true, "checksum": "0a839197c2fa15204014befb1c771a17aea5bdd1", "dest": "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh", "gid": 0, "group": "root", "item": {"key": "nova_api_ensure_default_cell.sh", "value": {"content": "#!/bin/bash\nDEFID=$(nova-manage cell_v2 list_cells | sed -e '1,3d' -e '$d' | awk -F ' *| *' '$2 == \"default\" {print $4}')\nif [ \"$DEFID\" ]; then\n echo \"(cellv2) Updating default cell_v2 cell $DEFID\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 update_cell --cell_uuid $DEFID --name=default\"\nelse\n echo \"(cellv2) Creating default cell_v2 cell\"\n su nova -s /bin/bash -c \"/usr/bin/nova-manage cell_v2 create_cell --name=default\"\nfi\n", "mode": "0700"}}, "md5sum": "12a4a82656ddaae342942097b752d9db", "mode": "0700", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 442, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760628.37-160006985937953/source", "state": "file", "uid": 0} >2018-08-20 06:23:48,907 p=1013 u=mistral | TASK [Set docker_config_default fact] ****************************************** >2018-08-20 06:23:48,907 p=1013 u=mistral | Monday 20 August 2018 06:23:48 -0400 (0:00:03.240) 0:04:30.937 ********* >2018-08-20 06:23:48,984 p=1013 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:48,984 p=1013 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:48,994 p=1013 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,005 p=1013 u=mistral | 
ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,014 p=1013 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,018 p=1013 u=mistral | ok: [controller-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,019 p=1013 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,026 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,027 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,033 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,036 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,041 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,041 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} 
>2018-08-20 06:23:49,055 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,056 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,070 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,071 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,084 p=1013 u=mistral | ok: [compute-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,085 p=1013 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,094 p=1013 u=mistral | ok: [ceph-0] => (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,095 p=1013 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,117 p=1013 u=mistral | TASK [Set docker_startup_configs_with_default fact] **************************** >2018-08-20 06:23:49,117 p=1013 u=mistral | Monday 20 August 2018 06:23:49 -0400 (0:00:00.210) 0:04:31.147 ********* >2018-08-20 06:23:49,231 p=1013 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this 
result", "changed": false} >2018-08-20 06:23:49,260 p=1013 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,712 p=1013 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:23:49,736 p=1013 u=mistral | TASK [Write docker-container-startup-configs] ********************************** >2018-08-20 06:23:49,737 p=1013 u=mistral | Monday 20 August 2018 06:23:49 -0400 (0:00:00.619) 0:04:31.767 ********* >2018-08-20 06:23:50,334 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "a11b37f646eb98d7b1c1098013e6f0147607f3e6", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "bd28080584d3e016b0f7a5a5a264b73a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 1055, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760629.85-108843521926340/source", "state": "file", "uid": 0} >2018-08-20 06:23:50,339 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "1a8955bd2468ae23951b0dfdbc771ea73f18b2fb", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "85a34aff853334deda9b9bb75b757484", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 105471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760629.79-91406618701304/source", "state": "file", "uid": 0} >2018-08-20 06:23:50,361 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": "7ed7aa1be3dc718918389e45f29715b788c682a9", "dest": "/var/lib/docker-container-startup-configs.json", "gid": 0, "group": "root", "md5sum": "f4f19530eff6acf3e7b7348d15ec249c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 12287, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760629.81-68646244668603/source", "state": "file", "uid": 0} >2018-08-20 06:23:50,387 p=1013 u=mistral | TASK [Write per-step docker-container-startup-configs] ************************* >2018-08-20 06:23:50,387 p=1013 u=mistral | Monday 20 August 2018 06:23:50 -0400 (0:00:00.650) 0:04:32.417 ********* >2018-08-20 06:23:51,016 p=1013 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760630.49-92963376690278/source", "state": "file", "uid": 0} >2018-08-20 06:23:51,029 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_1'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760630.49-244297797406611/source", "state": "file", "uid": 0} >2018-08-20 06:23:51,048 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_image_tag': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_data_ownership': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'command': [u'chown', u'-R', u'mysql:', u'/var/lib/mysql'], 'user': u'root', 'volumes': [u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'redis_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'mysql_bootstrap': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'DB_MAX_TIMEOUT=60', u'DB_CLUSTERCHECK_PASSWORD=6COJg84n4O2z1LTUty9zlNXHJ', u'DB_ROOT_PASSWORD=fefDKgMUht'], 'command': [u'bash', u'-ec', u'if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e 
"\\n[mysqld]\\nwsrep_provider=none" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c \'until mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" ping 2>/dev/null; do sleep 1; done\'\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "CREATE USER \'clustercheck\'@\'localhost\' IDENTIFIED BY \'${DB_CLUSTERCHECK_PASSWORD}\';"\nmysql -uroot -p"${DB_ROOT_PASSWORD}" -e "GRANT PROCESS ON *.* TO \'clustercheck\'@\'localhost\' WITH GRANT OPTION;"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p"${DB_ROOT_PASSWORD}" shutdown'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'detach': False}, 'haproxy_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_image_tag': 
{'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'cinder_backup_image_tag': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/dev/shm:/dev/shm:rw', u'/etc/sysconfig/docker:/etc/sysconfig/docker:ro', u'/usr/bin:/usr/bin:ro', u'/var/run/docker.sock:/var/run/docker.sock:rw'], 'net': u'host', 'detach': False}, 'rabbitmq_bootstrap': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'KOLLA_BOOTSTRAP=True', u'RABBITMQ_CLUSTER_COOKIE=3GZFze7X5EhZt9WdCstc'], 'volumes': [u'/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro', u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/var/lib/rabbitmq:/var/lib/rabbitmq'], 'net': u'host', 'privileged': False}, 'memcached': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c 
${MAXCONN} $OPTIONS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}}, 'key': u'step_1'}) => {"changed": true, "checksum": "7026f502d11262b6043d270d3f32b15eb9e7ffda", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_1.json", "gid": 0, "group": "root", "item": {"key": "step_1", "value": {"cinder_backup_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-cinder-backup:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "cinder_volume_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-cinder-volume:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", 
"/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "haproxy_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-haproxy:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "memcached": {"command": ["/bin/bash", "-c", "source /etc/sysconfig/memcached; /usr/bin/memcached -p ${PORT} -u ${USER} -m ${CACHESIZE} -c ${MAXCONN} $OPTIONS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/memcached/etc/sysconfig/memcached:/etc/sysconfig/memcached:ro"]}, "mysql_bootstrap": {"command": ["bash", "-ec", "if [ -e /var/lib/mysql/mysql ]; then exit 0; fi\necho -e \"\\n[mysqld]\\nwsrep_provider=none\" >> /etc/my.cnf\nkolla_set_configs\nsudo -u mysql -E kolla_extend_start\nmysqld_safe --skip-networking --wsrep-on=OFF &\ntimeout ${DB_MAX_TIMEOUT} /bin/bash -c 'until mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" 
ping 2>/dev/null; do sleep 1; done'\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${DB_CLUSTERCHECK_PASSWORD}';\"\nmysql -uroot -p\"${DB_ROOT_PASSWORD}\" -e \"GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' WITH GRANT OPTION;\"\ntimeout ${DB_MAX_TIMEOUT} mysqladmin -uroot -p\"${DB_ROOT_PASSWORD}\" shutdown"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "DB_MAX_TIMEOUT=60", "DB_CLUSTERCHECK_PASSWORD=6COJg84n4O2z1LTUty9zlNXHJ", "DB_ROOT_PASSWORD=fefDKgMUht"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/mysql.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "mysql_data_ownership": {"command": ["chown", "-R", "mysql:", "/var/lib/mysql"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/var/lib/mysql:/var/lib/mysql"]}, "mysql_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-mariadb:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "start_order": 2, "user": "root", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "rabbitmq_bootstrap": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "KOLLA_BOOTSTRAP=True", "RABBITMQ_CLUSTER_COOKIE=3GZFze7X5EhZt9WdCstc"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", "net": "host", "privileged": false, "start_order": 0, "volumes": ["/var/lib/kolla/config_files/rabbitmq.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro", "/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/var/lib/rabbitmq:/var/lib/rabbitmq"]}, "rabbitmq_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-rabbitmq:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}, "redis_image_tag": {"command": ["/bin/bash", "-c", "/usr/bin/docker tag '192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2' '192.168.24.1:8787/rhosp14/openstack-redis:pcmklatest'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/dev/shm:/dev/shm:rw", "/etc/sysconfig/docker:/etc/sysconfig/docker:ro", "/usr/bin:/usr/bin:ro", "/var/run/docker.sock:/var/run/docker.sock:rw"]}}}, "md5sum": "f828a03ffd14f3d7f1b5278b0430f8d8", "mode": "0600", "owner": "root", "secontext": 
"system_u:object_r:var_lib_t:s0", "size": 6913, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760630.47-156687231311722/source", "state": "file", "uid": 0} >2018-08-20 06:23:51,544 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_3'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.02-126782093925051/source", "state": "file", "uid": 0} >2018-08-20 06:23:51,561 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', 
u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'nova_statedir_owner': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2', 'command': u'/docker-config-scripts/nova_statedir_ownership.py', 'user': u'root', 'volumes': [u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/docker-config-scripts/:/docker-config-scripts/'], 'detach': False, 'privileged': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_libvirt': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/var/log/containers/libvirt:/var/log/libvirt', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro', u'/var/lib/vhost_sockets:/var/lib/vhost_sockets', u'/sys/fs/selinux:/sys/fs/selinux'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_virtlogd': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/dev:/dev', u'/run:/run', u'/sys/fs/cgroup:/sys/fs/cgroup', u'/var/lib/nova:/var/lib/nova:shared', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt', u'/etc/libvirt/qemu:/etc/libvirt/qemu:ro', u'/var/log/libvirt/qemu:/var/log/libvirt/qemu'], 'net': u'host', 'privileged': True, 'restart': u'always'}}, 'key': u'step_3'}) => {"changed": true, "checksum": "4a91a094bab7af2d8bffe546796d668799dbaa2a", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_libvirt": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/lib/kolla/config_files/nova_libvirt.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", "/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/var/log/containers/libvirt:/var/log/libvirt", "/var/log/libvirt/qemu:/var/log/libvirt/qemu:ro", "/var/lib/vhost_sockets:/var/lib/vhost_sockets", "/sys/fs/selinux:/sys/fs/selinux"]}, "nova_statedir_owner": {"command": "/docker-config-scripts/nova_statedir_ownership.py", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/lib/nova:/var/lib/nova:shared", "/var/lib/docker-config-scripts/:/docker-config-scripts/"]}, "nova_virtlogd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/dev:/dev", "/run:/run", 
"/sys/fs/cgroup:/sys/fs/cgroup", "/var/lib/nova:/var/lib/nova:shared", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt", "/etc/libvirt/qemu:/etc/libvirt/qemu:ro", "/var/log/libvirt/qemu:/var/log/libvirt/qemu"]}}}, "md5sum": "b55c6d5d8e74c129ea1a838af85b5bf3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 5428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.02-272343575883285/source", "state": "file", "uid": 0} >2018-08-20 06:23:51,609 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'nova_placement': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd', u'/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'restart': u'always'}, 'swift_rsync_fix': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'sed -i "/pid file/d" /var/lib/kolla/config_files/src/etc/rsyncd.conf'], 'user': u'root', 'volumes': 
[u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw'], 'net': u'host', 'detach': False}, 'nova_db_sync': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'heat_engine_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', 
u'/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'swift_copy_rings': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2', 'detach': False, 'command': [u'/bin/bash', u'-c', u'cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups'], 'user': u'root', 'volumes': [u'/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw', u'/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro']}, 'nova_api_ensure_default_cell': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro'], 'net': u'host', 'detach': False}, 'keystone_cron': {'start_order': 4, 
'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/bin/bash', u'-c', u'/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'panko_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', 
u'/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/panko/etc/panko:/etc/panko:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_backup_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'nova_api_db_sync': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'iscsid': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', u'/dev/:/dev/', u'/run/:/run/', u'/sys:/sys', u'/lib/modules:/lib/modules:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'keystone_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'/usr/bin/bootstrap_host_exec', u'keystone', u'/usr/local/bin/kolla_start'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'detach': False, 'privileged': False}, 'ceilometer_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'start_order': 0, 'volumes': 
[u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'user': u'root'}, 'keystone': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd', u'/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2', 'command': u'/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', 
u'/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_init_logs': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'user': u'root', 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'privileged': False}, 'neutron_ovs_bridge': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': [u'puppet', u'apply', u'--modulepath', u'/etc/puppet/modules:/usr/share/openstack-puppet/modules', u'--tags', u'file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config', u'-v', u'-e', u'include neutron::agents::ml2::ovs'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/etc/puppet:/etc/puppet:ro', u'/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro', u'/var/run/openvswitch/:/var/run/openvswitch/'], 'net': u'host', 'detach': False, 'privileged': True}, 'cinder_api_db_sync': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_api', u"su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'detach': False, 'privileged': False}, 'nova_api_map_cell0': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', 
u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro'], 'net': u'host', 'detach': False}, 'glance_api_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2', 'environment': [u'KOLLA_BOOTSTRAP=True', u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'command': u"/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'detach': False, 'privileged': False}, 'neutron_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', 'command': [u'/usr/bin/bootstrap_host_exec', u'neutron_api', u'neutron-db-manage', u'upgrade', u'heads'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro', u'/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro'], 'net': u'host', 'detach': False, 'privileged': False}, 'sahara_db_sync': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2', 'command': u"/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'detach': False, 'privileged': False}, 'keystone_bootstrap': {'action': u'exec', 'start_order': 3, 'command': [u'keystone', u'/usr/bin/bootstrap_host_exec', u'keystone', u'keystone-manage', u'bootstrap', u'--bootstrap-password', u'PIu5Ro9KCYzlHc3VGOqS1iIZZ'], 'user': u'root'}, 'horizon': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS', u'ENABLE_IRONIC=yes', u'ENABLE_MANILA=yes', 
u'ENABLE_HEAT=yes', u'ENABLE_MISTRAL=yes', u'ENABLE_OCTAVIA=yes', u'ENABLE_SAHARA=yes', u'ENABLE_CLOUDKITTY=no', u'ENABLE_FREEZER=no', u'ENABLE_FWAAS=no', u'ENABLE_KARBOR=no', u'ENABLE_DESIGNATE=no', u'ENABLE_MAGNUM=no', u'ENABLE_MURANO=no', u'ENABLE_NEUTRON_LBAAS=no', u'ENABLE_SEARCHLIGHT=no', u'ENABLE_SENLIN=no', u'ENABLE_SOLUM=no', u'ENABLE_TACKER=no', u'ENABLE_TROVE=no', u'ENABLE_WATCHER=no', u'ENABLE_ZAQAR=no', u'ENABLE_ZUN=no'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/www/:/var/www/:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_setup_srv': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2', 'command': [u'chown', u'-R', u'swift:', u'/srv/node'], 'user': u'root', 'volumes': [u'/srv/node:/srv/node']}}, 'key': u'step_3'}) => {"changed": true, "checksum": "a725e2c1abc44632a76b242853367048e9747c93", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": {"aodh_db_sync": {"command": "/usr/bin/bootstrap_host_exec aodh_api su aodh -s /bin/bash -c /usr/bin/aodh-dbsync", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", "net": 
"host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/aodh/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/aodh/etc/aodh/:/etc/aodh/:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "ceilometer_init_log": {"command": ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-08-17.2", "start_order": 0, "user": "root", "volumes": ["/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_api", "su cinder -s /bin/bash -c 'cinder-manage db sync --bump-versions'"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/cinder/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/cinder/etc/cinder/:/etc/cinder/:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_backup_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "cinder_volume_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "glance_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec glance_api su glance -s /bin/bash -c '/usr/local/bin/kolla_start'", "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_engine_db_sync": {"command": 
"/usr/bin/bootstrap_host_exec heat_engine su heat -s /bin/bash -c 'heat-manage db_sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/config-data/heat/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/heat/etc/heat/:/etc/heat/:ro"]}, "horizon": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS", "ENABLE_IRONIC=yes", "ENABLE_MANILA=yes", "ENABLE_HEAT=yes", "ENABLE_MISTRAL=yes", "ENABLE_OCTAVIA=yes", "ENABLE_SAHARA=yes", "ENABLE_CLOUDKITTY=no", "ENABLE_FREEZER=no", "ENABLE_FWAAS=no", "ENABLE_KARBOR=no", "ENABLE_DESIGNATE=no", "ENABLE_MAGNUM=no", "ENABLE_MURANO=no", "ENABLE_NEUTRON_LBAAS=no", "ENABLE_SEARCHLIGHT=no", "ENABLE_SENLIN=no", "ENABLE_SOLUM=no", "ENABLE_TACKER=no", "ENABLE_TROVE=no", "ENABLE_WATCHER=no", "ENABLE_ZAQAR=no", "ENABLE_ZUN=no"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/horizon.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/horizon/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/www/:/var/www/:ro", "", ""]}, "iscsid": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro", "/dev/:/dev/", "/run/:/run/", "/sys:/sys", "/lib/modules:/lib/modules:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro"]}, "keystone": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "keystone_bootstrap": {"action": "exec", "command": ["keystone", "/usr/bin/bootstrap_host_exec", "keystone", "keystone-manage", "bootstrap", "--bootstrap-password", "PIu5Ro9KCYzlHc3VGOqS1iIZZ"], "start_order": 3, "user": "root"}, "keystone_cron": {"command": ["/bin/bash", "-c", "/usr/local/bin/kolla_set_configs && /usr/sbin/crond -n"], "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 4, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro"]}, "keystone_db_sync": {"command": 
["/usr/bin/bootstrap_host_exec", "keystone", "/usr/local/bin/kolla_start"], "detach": false, "environment": ["KOLLA_BOOTSTRAP=True", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd", "/var/lib/kolla/config_files/keystone.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/keystone/:/var/lib/kolla/config_files/src:ro", "", ""]}, "neutron_db_sync": {"command": ["/usr/bin/bootstrap_host_exec", "neutron_api", "neutron-db-manage", "upgrade", "heads"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", 
"/var/lib/config-data/neutron/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro", "/var/lib/config-data/neutron/usr/share/neutron:/usr/share/neutron:ro"]}, "neutron_ovs_bridge": {"command": ["puppet", "apply", "--modulepath", "/etc/puppet/modules:/usr/share/openstack-puppet/modules", "--tags", "file,file_line,concat,augeas,neutron::plugins::ovs::bridge,vs_config", "-v", "-e", "include neutron::agents::ml2::ovs"], "detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/etc/puppet:/etc/puppet:ro", "/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro", "/var/run/openvswitch/:/var/run/openvswitch/"]}, "nova_api_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_api_ensure_default_cell": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_ensure_default_cell.sh", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_ensure_default_cell.sh:/nova_api_ensure_default_cell.sh:ro"]}, "nova_api_map_cell0": 
{"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage cell_v2 map_cell0'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_db_sync": {"command": "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage db sync'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", 
"/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro"]}, "nova_placement": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", "net": "host", "restart": "always", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd", "/var/lib/kolla/config_files/nova_placement.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_placement/:/var/lib/kolla/config_files/src:ro", "", ""]}, "panko_db_sync": {"command": "/usr/bin/bootstrap_host_exec panko_api su panko -s /bin/bash -c '/usr/bin/panko-dbsync '", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", 
"/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/config-data/panko/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/panko/etc/panko:/etc/panko:ro"]}, "sahara_db_sync": {"command": "/usr/bin/bootstrap_host_exec sahara_api su sahara -s /bin/bash -c 'sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head'", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", "net": "host", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/sahara/etc/sahara/:/etc/sahara/:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_copy_rings": {"command": ["/bin/bash", "-c", "cp -v -a -t /etc/swift /swift_ringbuilder/etc/swift/*.gz /swift_ringbuilder/etc/swift/*.builder /swift_ringbuilder/etc/swift/backups"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/etc/swift:/etc/swift:rw", "/var/lib/config-data/swift_ringbuilder:/swift_ringbuilder:ro"]}, "swift_rsync_fix": {"command": ["/bin/bash", "-c", "sed -i \"/pid file/d\" /var/lib/kolla/config_files/src/etc/rsyncd.conf"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "user": "root", "volumes": ["/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:rw"]}, 
"swift_setup_srv": {"command": ["chown", "-R", "swift:", "/srv/node"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2", "user": "root", "volumes": ["/srv/node:/srv/node"]}}}, "md5sum": "3f895791b24cb2d0966866027a8a4571", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 22165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.03-171973965493280/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,056 p=1013 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.56-246991022037051/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,068 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_2'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.56-268204432499454/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,156 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'gnocchi_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/log/gnocchi'], 'user': u'root', 'volumes': [u'/var/log/containers/gnocchi:/var/log/gnocchi', 
u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd']}, 'mysql_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/mysql:/var/lib/mysql:rw'], 'net': u'host', 'detach': False}, 'gnocchi_init_lib': {'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R gnocchi:gnocchi /var/lib/gnocchi'], 'user': u'root', 'volumes': [u'/var/lib/gnocchi:/var/lib/gnocchi']}, 'cinder_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 
'user': u'root'}, 'create_dnsmasq_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-08-17.2', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::dhcp_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'panko_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R panko:panko /var/log/panko'], 'user': u'root', 'volumes': [u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd']}, 'redis_init_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'config_volume': u'redis_init_bundle', 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'cinder_scheduler_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R cinder:cinder /var/log/cinder'], 'privileged': False, 'volumes': [u'/var/log/containers/cinder:/var/log/cinder'], 'user': u'root'}, 'glance_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R glance:glance /var/log/glance'], 'privileged': False, 'volumes': [u'/var/log/containers/glance:/var/log/glance'], 'user': u'root'}, 'clustercheck': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', 
u'/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro', u'/var/lib/mysql:/var/lib/mysql'], 'net': u'host', 'restart': u'always'}, 'haproxy_init_bundle': {'start_order': 3, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation', u'include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro', u'/etc/sysconfig:/etc/sysconfig:rw', u'/usr/libexec/iptables:/usr/libexec/iptables:ro', u'/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': 
u'host', 'detach': False, 'privileged': True}, 'neutron_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R neutron:neutron /var/log/neutron'], 'privileged': False, 'volumes': [u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd'], 'user': u'root'}, 'mysql_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', 'config_volume': u'mysql', 'command': [u'/usr/bin/bootstrap_host_exec', u'mysql', u'if /usr/sbin/pcs resource show galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo "galera-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'rabbitmq_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': [u'/docker_puppet_apply.sh', u'2', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready', u'include ::tripleo::profile::base::pacemaker;include 
::tripleo::profile::pacemaker::rabbitmq_bundle', u'--debug'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/bin/true:/bin/epmd'], 'net': u'host', 'detach': False}, 'nova_api_init_logs': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd'], 'user': u'root'}, 'haproxy_restart_bundle': {'start_order': 2, 'image': u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2', 'config_volume': u'haproxy', 'command': [u'/usr/bin/bootstrap_host_exec', u'haproxy', u'if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo "haproxy-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'create_keepalived_wrapper': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-08-17.2', 'pid': u'host', 'command': [u'/docker_puppet_apply.sh', u'4', u'file', u'include ::tripleo::profile::base::neutron::l3_agent_wrappers'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'detach': False}, 'rabbitmq_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2', 'config_volume': u'rabbitmq', 'command': [u'/usr/bin/bootstrap_host_exec', u'rabbitmq', u'if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs resource restart --wait=600 rabbitmq-bundle; echo "rabbitmq-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'horizon_fix_perms': {'image': u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard'], 'user': u'root', 'volumes': [u'/var/log/containers/horizon:/var/log/horizon', u'/var/log/containers/httpd/horizon:/var/log/httpd', u'/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard']}, 'aodh_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R aodh:aodh /var/log/aodh'], 'user': u'root', 'volumes': [u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd']}, 'nova_metadata_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'privileged': False, 'volumes': [u'/var/log/containers/nova:/var/log/nova'], 'user': u'root'}, 'redis_restart_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2', 'config_volume': u'redis', 'command': [u'/usr/bin/bootstrap_host_exec', u'redis', u'if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo "redis-bundle restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'heat_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R heat:heat /var/log/heat'], 'user': u'root', 'volumes': [u'/var/log/containers/heat:/var/log/heat']}, 'nova_placement_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R nova:nova /var/log/nova'], 'start_order': 1, 'volumes': [u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-placement:/var/log/httpd'], 'user': u'root'}, 'keystone_init_log': {'image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u'chown -R keystone:keystone /var/log/keystone'], 'start_order': 1, 'volumes': [u'/var/log/containers/keystone:/var/log/keystone', u'/var/log/containers/httpd/keystone:/var/log/httpd'], 'user': u'root'}}, 'key': u'step_2'}) => {"changed": true, "checksum": "96e03c4331ec19a825613201eb62096da712b203", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_2.json", "gid": 0, "group": "root", "item": {"key": "step_2", "value": {"aodh_init_log": {"command": ["/bin/bash", "-c", "chown -R aodh:aodh /var/log/aodh"], "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", 
"user": "root", "volumes": ["/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd"]}, "cinder_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler_init_logs": {"command": ["/bin/bash", "-c", "chown -R cinder:cinder /var/log/cinder"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/cinder:/var/log/cinder"]}, "clustercheck": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/clustercheck.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/clustercheck/:/var/lib/kolla/config_files/src:ro", "/var/lib/mysql:/var/lib/mysql"]}, "create_dnsmasq_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::dhcp_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-08-17.2", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "create_keepalived_wrapper": {"command": ["/docker_puppet_apply.sh", "4", "file", "include ::tripleo::profile::base::neutron::l3_agent_wrappers"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-08-17.2", "net": "host", "pid": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron"]}, "glance_init_logs": {"command": ["/bin/bash", "-c", "chown -R glance:glance /var/log/glance"], "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/glance:/var/log/glance"]}, "gnocchi_init_lib": 
{"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/lib/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", "user": "root", "volumes": ["/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_init_log": {"command": ["/bin/bash", "-c", "chown -R gnocchi:gnocchi /var/log/gnocchi"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", "user": "root", "volumes": ["/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd"]}, "haproxy_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,tripleo::firewall::rule,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ip,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker; include ::tripleo::profile::pacemaker::haproxy_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", "net": "host", "privileged": true, "start_order": 3, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro", "/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro", "/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro", 
"/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro", "/etc/sysconfig:/etc/sysconfig:rw", "/usr/libexec/iptables:/usr/libexec/iptables:ro", "/usr/libexec/initscripts/legacy-actions:/usr/libexec/initscripts/legacy-actions:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "haproxy_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "haproxy", "if /usr/sbin/pcs resource show haproxy-bundle; then /usr/sbin/pcs resource restart --wait=600 haproxy-bundle; echo \"haproxy-bundle restart invoked\"; fi"], "config_volume": "haproxy", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/haproxy/:/var/lib/kolla/config_files/src:ro"]}, "heat_init_log": {"command": ["/bin/bash", "-c", "chown -R heat:heat /var/log/heat"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2", "user": "root", "volumes": ["/var/log/containers/heat:/var/log/heat"]}, "horizon_fix_perms": {"command": ["/bin/bash", "-c", "touch /var/log/horizon/horizon.log && chown -R apache:apache /var/log/horizon && chmod -R a+rx /etc/openstack-dashboard"], "image": "192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", "user": "root", "volumes": 
["/var/log/containers/horizon:/var/log/horizon", "/var/log/containers/httpd/horizon:/var/log/httpd", "/var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard:/etc/openstack-dashboard"]}, "keystone_init_log": {"command": ["/bin/bash", "-c", "chown -R keystone:keystone /var/log/keystone"], "image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/keystone:/var/log/keystone", "/var/log/containers/httpd/keystone:/var/log/httpd"]}, "mysql_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,galera_ready,mysql_database,mysql_grant,mysql_user", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::mysql_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/mysql:/var/lib/mysql:rw"]}, "mysql_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "mysql", "if /usr/sbin/pcs resource show 
galera-bundle; then /usr/sbin/pcs resource restart --wait=600 galera-bundle; echo \"galera-bundle restart invoked\"; fi"], "config_volume": "mysql", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/mysql/:/var/lib/kolla/config_files/src:ro"]}, "neutron_init_logs": {"command": ["/bin/bash", "-c", "chown -R neutron:neutron /var/log/neutron"], "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd"]}, "nova_api_init_logs": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd"]}, "nova_metadata_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova"]}, "nova_placement_init_log": {"command": ["/bin/bash", "-c", "chown -R nova:nova /var/log/nova"], "image": 
"192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", "start_order": 1, "user": "root", "volumes": ["/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-placement:/var/log/httpd"]}, "panko_init_log": {"command": ["/bin/bash", "-c", "chown -R panko:panko /var/log/panko"], "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", "user": "root", "volumes": ["/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd"]}, "rabbitmq_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation,rabbitmq_policy,rabbitmq_user,rabbitmq_ready", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::rabbitmq_bundle", "--debug"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/bin/true:/bin/epmd"]}, "rabbitmq_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "rabbitmq", "if /usr/sbin/pcs resource show rabbitmq-bundle; then /usr/sbin/pcs 
resource restart --wait=600 rabbitmq-bundle; echo \"rabbitmq-bundle restart invoked\"; fi"], "config_volume": "rabbitmq", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/rabbitmq/:/var/lib/kolla/config_files/src:ro"]}, "redis_init_bundle": {"command": ["/docker_puppet_apply.sh", "2", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::resource::ocf,pacemaker::constraint::order,pacemaker::constraint::colocation", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::database::redis_bundle", "--debug"], "config_volume": "redis_init_bundle", "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", "net": "host", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "redis_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "redis", "if /usr/sbin/pcs resource show redis-bundle; then /usr/sbin/pcs resource restart --wait=600 redis-bundle; echo \"redis-bundle restart invoked\"; fi"], "config_volume": "redis", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/redis/:/var/lib/kolla/config_files/src:ro"]}}}, "md5sum": "5c2565ef5950c41e3d311bc25f82f887", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 17318, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760631.59-150492441667757/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,589 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": 
"99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.08-216241489991812/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,607 p=1013 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_5'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.07-195432739505547/source", "state": "file", "uid": 0} >2018-08-20 06:23:52,729 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'cinder_volume_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', 
u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_api': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'gnocchi_statsd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'cinder_backup_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_backup', u'if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo "openstack-cinder-backup restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'gnocchi_metricd': {'start_order': 1, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/gnocchi:/var/lib/gnocchi'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_discover_hosts': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': u'/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh', 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', 
u'/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro', u'/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro'], 'net': u'host', 'detach': False}, 'ceilometer_gnocchi_upgrade': {'start_order': 99, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2', 'command': [u'/usr/bin/bootstrap_host_exec', u'ceilometer_agent_central', u"su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'detach': False, 'privileged': False}, 'cinder_volume_restart_bundle': {'start_order': 0, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2', 'config_volume': u'cinder', 'command': [u'/usr/bin/bootstrap_host_exec', u'cinder_volume', u'if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo "openstack-cinder-volume restart invoked"; fi'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'detach': False}, 'cinder_backup_init_bundle': {'start_order': 1, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2', 'environment': [u'TRIPLEO_DEPLOY_IDENTIFIER=1534759508'], 'command': [u'/docker_puppet_apply.sh', u'5', u'file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location', u'include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle', u'--debug --verbose'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro', u'/etc/puppet:/tmp/puppet-etc:ro', u'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', u'/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro', u'/dev/shm:/dev/shm:rw'], 'net': u'host', 'detach': False}, 'gnocchi_db_sync': {'start_order': 0, 'image': 
u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro', u'/var/lib/gnocchi:/var/lib/gnocchi', u'/var/log/containers/gnocchi:/var/log/gnocchi', u'/var/log/containers/httpd/gnocchi-api:/var/log/httpd', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro'], 'net': u'host', 'detach': False, 'privileged': False}}, 'key': u'step_5'}) => {"changed": true, "checksum": "c341d4e8e71ae30f21c56721c2415039def1078b", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_5.json", "gid": 0, "group": "root", "item": {"key": "step_5", "value": {"ceilometer_gnocchi_upgrade": {"command": ["/usr/bin/bootstrap_host_exec", "ceilometer_agent_central", "su ceilometer -s /bin/bash -c 'for n in {1..10}; do /usr/bin/ceilometer-upgrade --skip-metering-database && exit 0 || sleep 30; done; exit 1'"], "detach": false, "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", "net": "host", "privileged": false, "start_order": 99, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/ceilometer/etc/ceilometer/:/etc/ceilometer/:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_backup_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::backup_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", "/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_backup_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_backup", "if /usr/sbin/pcs resource show openstack-cinder-backup; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-backup; echo 
\"openstack-cinder-backup restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "cinder_volume_init_bundle": {"command": ["/docker_puppet_apply.sh", "5", "file,file_line,concat,augeas,pacemaker::resource::bundle,pacemaker::property,pacemaker::constraint::location", "include ::tripleo::profile::base::pacemaker;include ::tripleo::profile::pacemaker::cinder::volume_bundle", "--debug --verbose"], "detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/var/lib/docker-config-scripts/docker_puppet_apply.sh:/docker_puppet_apply.sh:ro", "/etc/puppet:/tmp/puppet-etc:ro", 
"/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw"]}, "cinder_volume_restart_bundle": {"command": ["/usr/bin/bootstrap_host_exec", "cinder_volume", "if /usr/sbin/pcs resource show openstack-cinder-volume; then /usr/sbin/pcs resource restart --wait=600 openstack-cinder-volume; echo \"openstack-cinder-volume restart invoked\"; fi"], "config_volume": "cinder", "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2", "net": "host", "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro", "/dev/shm:/dev/shm:rw", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro"]}, "gnocchi_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/lib/kolla/config_files/gnocchi_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "", ""]}, "gnocchi_db_sync": {"detach": false, "environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", "net": "host", "privileged": false, "start_order": 0, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_db_sync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/lib/gnocchi:/var/lib/gnocchi", "/var/log/containers/gnocchi:/var/log/gnocchi", "/var/log/containers/httpd/gnocchi-api:/var/log/httpd", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro"]}, "gnocchi_metricd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-metricd:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_metricd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "gnocchi_statsd": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-gnocchi-statsd:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 1, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/gnocchi_statsd.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/gnocchi/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/gnocchi:/var/log/gnocchi", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/gnocchi:/var/lib/gnocchi"]}, "nova_api_discover_hosts": {"command": "/usr/bin/bootstrap_host_exec nova_api /nova_api_discover_hosts.sh", 
"detach": false, "environment": ["TRIPLEO_DEPLOY_IDENTIFIER=1534759508"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "start_order": 1, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro", "/var/lib/config-data/nova/etc/nova/:/etc/nova/:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/docker-config-scripts/nova_api_discover_hosts.sh:/nova_api_discover_hosts.sh:ro"]}}}, "md5sum": "da3cc4646ee7d24dbcd571060a63e15d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 11741, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.15-87186733954418/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,146 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "787e0a03efaae710627bead42d1364d33efc7c6f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}}}, "md5sum": "9c758158437d5c760c2f29ac896a57b0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 973, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.6-415073312589/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,173 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'ceilometer_agent_compute': {'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/run/libvirt:/var/run/libvirt:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_libvirt_init_secret': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2', 'command': [u'/bin/bash', u'-c', u"/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '00d03b50-a460-11e8-8cf1-525400721501' --base64 'AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ=='"], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro', u'/etc/libvirt:/etc/libvirt', u'/var/run/libvirt:/var/run/libvirt', u'/var/lib/libvirt:/var/lib/libvirt'], 'detach': False, 'privileged': False}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_migration_target': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/ssh/:/host-ssh/:ro', u'/run:/run', u'/var/lib/nova:/var/lib/nova:shared'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_compute': {'ipc': u'host', 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro', u'/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/dev:/dev', u'/lib/modules:/lib/modules:ro', u'/run:/run', 
u'/var/lib/nova:/var/lib/nova:shared', u'/var/lib/libvirt:/var/lib/libvirt', u'/sys/class/net:/sys/class/net', u'/sys/bus/pci:/sys/bus/pci'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "99a595f9e9b4d4c95c566c5cfe3bdc23074f9920", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"ceilometer_agent_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-compute:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/run/libvirt:/var/run/libvirt:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_compute": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", "ipc": "host", "net": "host", "privileged": true, "restart": "always", "ulimit": ["nofile=1024"], "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/dev:/dev", "/lib/modules:/lib/modules:ro", 
"/run:/run", "/var/lib/nova:/var/lib/nova:shared", "/var/lib/libvirt:/var/lib/libvirt", "/sys/class/net:/sys/class/net", "/sys/bus/pci:/sys/bus/pci"]}, "nova_libvirt_init_secret": {"command": ["/bin/bash", "-c", "/usr/bin/virsh secret-define --file /etc/nova/secret.xml && /usr/bin/virsh secret-set-value --secret '00d03b50-a460-11e8-8cf1-525400721501' --base64 'AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ=='"], "detach": false, "image": "192.168.24.1:8787/rhosp14/openstack-nova-libvirt:2018-08-17.2", "privileged": false, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova:ro", "/etc/libvirt:/etc/libvirt", "/var/run/libvirt:/var/run/libvirt", "/var/lib/libvirt:/var/lib/libvirt"]}, "nova_migration_target": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova_libvirt/:/var/lib/kolla/config_files/src:ro", "/etc/ssh/:/host-ssh/:ro", "/run:/run", "/var/lib/nova:/var/lib/nova:shared"]}}}, "md5sum": "e91498dc448a9ebe399f1e2f214cb940", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6779, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.61-160378122350178/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,361 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'swift_container_updater': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'aodh_evaluator': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', 
u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'/run:/run'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'environment': 
[u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'cinder_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 
'swift_proxy': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/run:/run', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'restart': u'always'}, 'neutron_dhcp': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', u'/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_object_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_metadata_agent': {'start_order': 10, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/neutron:/var/lib/neutron'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'ceilometer_agent_central': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 
'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'keystone_refresh': {'action': u'exec', 'start_order': 1, 'command': [u'keystone', u'pkill', u'--signal', u'USR1', u'httpd'], 'user': u'root'}, 'swift_account_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': 
u'always'}, 'aodh_notifier': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 
'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_consoleauth': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'glance_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/glance:/var/log/glance', u'/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json', 
u'/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro', u'/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro', u'/var/lib/glance:/var/lib/glance:slave'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_reaper': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'ceilometer_agent_notification': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro', u'/var/log/containers/ceilometer:/var/log/ceilometer'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_vnc_proxy': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_rsync': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'nova_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/log/containers/httpd/nova-api:/var/log/httpd', u'/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'aodh_api': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh', u'/var/log/containers/httpd/aodh-api:/var/log/httpd', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_metadata': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'nova', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'heat_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', 
u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': 
[u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'neutron_l3_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch', u'/var/lib/neutron:/var/lib/neutron', 
u'/run/netns:/run/netns:shared', u'/var/lib/openstack:/var/lib/openstack', u'/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro', u'/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro', u'/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_scheduler': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'nova_conductor': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/nova:/var/log/nova', u'/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_server': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'sahara_api': {'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/lib/modules:/lib/modules:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'sahara_engine': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json', u'/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro', u'/var/lib/sahara:/var/lib/sahara', u'/var/log/containers/sahara:/var/log/sahara'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_ovs_agent': {'start_order': 10, 'ulimit': [u'nofile=1024'], 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro', u'/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro', u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch'], 'net': u'host', 'privileged': True, 'restart': u'always'}, 'cinder_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/cinder:/var/log/cinder', u'/var/log/containers/httpd/cinder-api:/var/log/httpd'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_account_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2', 
'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_container_replicator': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_updater': {'image': 
u'192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'swift_object_expirer': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 
'heat_api_cron': {'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'swift_container_auditor': {'image': u'192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'swift', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro', u'/srv/node:/srv/node', u'/dev:/dev', 
u'/var/cache/swift:/var/cache/swift'], 'net': u'host', 'restart': u'always'}, 'panko_api': {'start_order': 2, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/panko:/var/log/panko', u'/var/log/containers/httpd/panko-api:/var/log/httpd', u'/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'aodh_listener': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro', 
u'/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers/aodh:/var/log/aodh'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'neutron_api': {'start_order': 0, 'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/neutron:/var/log/neutron', u'/var/log/containers/httpd/neutron-api:/var/log/httpd', u'/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro'], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'heat_api_cfn': {'healthcheck': {'test': u'/openstack/healthcheck'}, 'image': u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', 
u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/log/containers/heat:/var/log/heat', u'/var/log/containers/httpd/heat-api-cfn:/var/log/httpd', u'/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro', u'', u''], 'net': u'host', 'privileged': False, 'restart': u'always'}, 'logrotate_crond': {'image': u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', 'pid': u'host', 'environment': [u'KOLLA_CONFIG_STRATEGY=COPY_ALWAYS'], 'user': u'root', 'volumes': [u'/etc/hosts:/etc/hosts:ro', u'/etc/localtime:/etc/localtime:ro', u'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', u'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', u'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', u'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', u'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', u'/dev/log:/dev/log', u'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', u'/etc/puppet:/etc/puppet:ro', u'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', u'/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro', u'/var/log/containers:/var/log/containers'], 'net': u'none', 'privileged': True, 'restart': u'always'}}, 'key': u'step_4'}) => {"changed": true, "checksum": "bf71121ae16bc0528a439673a808192ff820fb8d", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_4.json", "gid": 0, "group": "root", "item": {"key": "step_4", "value": {"aodh_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", 
"/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh", "/var/log/containers/httpd/aodh-api:/var/log/httpd", "", ""]}, "aodh_evaluator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-evaluator:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_evaluator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_listener": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-listener:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": 
["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_listener.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "aodh_notifier": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-aodh-notifier:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/aodh_notifier.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/aodh/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/aodh:/var/log/aodh"]}, "ceilometer_agent_central": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", 
"volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_central.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "ceilometer_agent_notification": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-ceilometer-notification:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/ceilometer_agent_notification.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/ceilometer/:/var/lib/kolla/config_files/src:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src-panko:ro", "/var/log/containers/ceilometer:/var/log/ceilometer"]}, "cinder_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], 
"healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", "/var/log/containers/httpd/cinder-api:/var/log/httpd", "", ""]}, "cinder_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder", 
"/var/log/containers/httpd/cinder-api:/var/log/httpd"]}, "cinder_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-cinder-scheduler:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/cinder_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/cinder/:/var/lib/kolla/config_files/src:ro", "/var/log/containers/cinder:/var/log/cinder"]}, "glance_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/glance:/var/log/glance", "/var/lib/kolla/config_files/glance_api.json:/var/lib/kolla/config_files/config.json", 
"/var/lib/config-data/puppet-generated/glance_api/:/var/lib/kolla/config_files/src:ro", "/etc/ceph:/var/lib/kolla/config_files/src-ceph:ro", "/var/lib/glance:/var/lib/glance:slave"]}, "heat_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cfn": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api-cfn:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cfn.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api_cfn/:/var/lib/kolla/config_files/src:ro", "", ""]}, "heat_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/log/containers/httpd/heat-api:/var/log/httpd", "/var/lib/kolla/config_files/heat_api_cron.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat_api/:/var/lib/kolla/config_files/src:ro"]}, "heat_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-heat-engine:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/heat:/var/log/heat", "/var/lib/kolla/config_files/heat_engine.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/heat/:/var/lib/kolla/config_files/src:ro"]}, "keystone_refresh": {"action": "exec", "command": ["keystone", "pkill", "--signal", "USR1", "httpd"], "start_order": 1, "user": "root"}, "logrotate_crond": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", "net": "none", "pid": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/crond/:/var/lib/kolla/config_files/src:ro", "/var/log/containers:/var/log/containers"]}, "neutron_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 0, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/log/containers/httpd/neutron-api:/var/log/httpd", "/var/lib/kolla/config_files/neutron_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro"]}, "neutron_dhcp": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-dhcp-agent:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_dhcp.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro", 
"/var/lib/neutron/dhcp_haproxy_wrapper:/usr/local/bin/haproxy:ro"]}, "neutron_l3_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-l3-agent:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_l3_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch", "/var/lib/neutron:/var/lib/neutron", "/run/netns:/run/netns:shared", "/var/lib/openstack:/var/lib/openstack", "/var/lib/neutron/keepalived_wrapper:/usr/local/bin/keepalived:ro", "/var/lib/neutron/l3_haproxy_wrapper:/usr/local/bin/haproxy:ro", "/var/lib/neutron/dibbler_wrapper:/usr/local/bin/dibbler_client:ro"]}, "neutron_metadata_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-metadata-agent:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/neutron:/var/lib/neutron"]}, "neutron_ovs_agent": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-neutron-openvswitch-agent:2018-08-17.2", "net": "host", "pid": "host", "privileged": true, "restart": "always", "start_order": 10, "ulimit": ["nofile=1024"], "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/neutron:/var/log/neutron", "/var/lib/kolla/config_files/neutron_ovs_agent.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/neutron/:/var/lib/kolla/config_files/src:ro", "/var/lib/docker-config-scripts/neutron_ovs_agent_launcher.sh:/neutron_ovs_agent_launcher.sh:ro", "/lib/modules:/lib/modules:ro", "/run/openvswitch:/run/openvswitch"]}, "nova_api": {"environment": 
["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "", ""]}, "nova_api_cron": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/log/containers/httpd/nova-api:/var/log/httpd", "/var/lib/kolla/config_files/nova_api_cron.json:/var/lib/kolla/config_files/config.json:ro", 
"/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_conductor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-conductor:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_conductor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_consoleauth": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-consoleauth:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", 
"/var/lib/kolla/config_files/nova_consoleauth.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_metadata": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "start_order": 2, "user": "nova", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_metadata.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "nova_scheduler": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-scheduler:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", 
"/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_scheduler.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro", "/run:/run"]}, "nova_vnc_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-nova-novncproxy:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/log/containers/nova:/var/log/nova", "/var/lib/kolla/config_files/nova_vnc_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/nova/:/var/lib/kolla/config_files/src:ro"]}, "panko_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "start_order": 2, "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", 
"/etc/puppet:/etc/puppet:ro", "/var/log/containers/panko:/var/log/panko", "/var/log/containers/httpd/panko-api:/var/log/httpd", "/var/lib/kolla/config_files/panko_api.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/panko/:/var/lib/kolla/config_files/src:ro", "", ""]}, "sahara_api": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-api.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/lib/modules:/lib/modules:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "sahara_engine": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-sahara-engine:2018-08-17.2", "net": "host", "privileged": false, "restart": "always", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", 
"/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/sahara-engine.json:/var/lib/kolla/config_files/config.json", "/var/lib/config-data/puppet-generated/sahara/:/var/lib/kolla/config_files/src:ro", "/var/lib/sahara:/var/lib/sahara", "/var/log/containers/sahara:/var/log/sahara"]}, "swift_account_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_reaper": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", 
"/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_reaper.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_account_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-account:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_account_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", 
"/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_container_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-container:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_container_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_auditor": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_auditor.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_expirer": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", 
"/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_expirer.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_replicator": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_replicator.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_server": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", 
"/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_server.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_object_updater": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "restart": "always", "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_object_updater.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev", "/var/cache/swift:/var/cache/swift"]}, "swift_proxy": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "healthcheck": {"test": "/openstack/healthcheck"}, "image": "192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", "net": "host", "restart": "always", "start_order": 2, "user": "swift", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", 
"/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_proxy.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/run:/run", "/srv/node:/srv/node", "/dev:/dev"]}, "swift_rsync": {"environment": ["KOLLA_CONFIG_STRATEGY=COPY_ALWAYS"], "image": "192.168.24.1:8787/rhosp14/openstack-swift-object:2018-08-17.2", "net": "host", "privileged": true, "restart": "always", "user": "root", "volumes": ["/etc/hosts:/etc/hosts:ro", "/etc/localtime:/etc/localtime:ro", "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro", "/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro", "/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro", "/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro", "/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro", "/dev/log:/dev/log", "/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro", "/etc/puppet:/etc/puppet:ro", "/var/lib/kolla/config_files/swift_rsync.json:/var/lib/kolla/config_files/config.json:ro", "/var/lib/config-data/puppet-generated/swift/:/var/lib/kolla/config_files/src:ro", "/srv/node:/srv/node", "/dev:/dev"]}}}, "md5sum": "d9e9ee618f38697b7b082dbd94af4690", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 47260, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760632.73-74092601381301/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,710 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": 
"bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760633.17-233176447802282/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,739 p=1013 u=mistral | changed: [compute-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760633.17-84694854131062/source", "state": "file", "uid": 0} >2018-08-20 06:23:53,860 p=1013 u=mistral | changed: [controller-0] => (item={'value': {}, 'key': u'step_6'}) => {"changed": true, "checksum": "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f", "dest": "/var/lib/tripleo-config/docker-container-startup-config-step_6.json", "gid": 0, "group": "root", "item": {"key": "step_6", "value": {}}, "md5sum": "99914b932bd37a50b983c5e7c90ae93b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 2, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760633.32-250159909746071/source", "state": "file", "uid": 0} >2018-08-20 06:23:54,226 p=1013 u=mistral | TASK [Create /var/lib/kolla/config_files directory] **************************** >2018-08-20 06:23:54,226 p=1013 u=mistral | Monday 20 August 2018 06:23:54 -0400 (0:00:03.839) 0:04:36.256 ********* >2018-08-20 06:23:54,487 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", 
"owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:54,509 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:54,512 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/lib/kolla/config_files", "secontext": "unconfined_u:object_r:container_file_t:s0", "size": 6, "state": "directory", "uid": 0} >2018-08-20 06:23:54,540 p=1013 u=mistral | TASK [Write kolla config json files] ******************************************* >2018-08-20 06:23:54,540 p=1013 u=mistral | Monday 20 August 2018 06:23:54 -0400 (0:00:00.313) 0:04:36.570 ********* >2018-08-20 06:23:55,224 p=1013 u=mistral | changed: [ceph-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760634.66-85500474032008/source", "state": "file", "uid": 0} >2018-08-20 06:23:55,229 p=1013 u=mistral | changed: 
[compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760634.68-45161214740278/source", "state": "file", "uid": 0} >2018-08-20 06:23:55,335 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -s -n'}, 'key': u'/var/lib/kolla/config_files/logrotate-crond.json'}) => {"changed": true, "checksum": "4c92019f9e75a1d5fd8ed0c534a1e2e37545fd52", "dest": "/var/lib/kolla/config_files/logrotate-crond.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/logrotate-crond.json", "value": {"command": "/usr/sbin/crond -s -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "4e44fe0987e7b03113435c6eed7ea3b5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 160, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760634.81-188546522299642/source", "state": "file", "uid": 0} >2018-08-20 06:23:55,709 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': 
[{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760635.23-67369807086941/source", "state": "file", "uid": 0} >2018-08-20 06:23:55,869 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/keystone.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/keystone.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760635.33-153997364385957/source", "state": "file", "uid": 0} >2018-08-20 06:23:56,238 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/libvirtd', 'permissions': [{'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_libvirt.json'}) => {"changed": true, "checksum": "b50cbe1f8b020aa49249248b57310f45005813b3", "dest": "/var/lib/kolla/config_files/nova_libvirt.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_libvirt.json", "value": {"command": "/usr/sbin/libvirtd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "8356787bbcfcb5674a0bf2570719654a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 512, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760635.71-216600449068312/source", "state": "file", "uid": 0} >2018-08-20 06:23:56,380 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 
'command': u'/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/lib/cinder', 'recurse': True}, {'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_backup.json'}) => {"changed": true, "checksum": "0e697e31bdc439b99552bac9ffe0bab07f2af4a4", "dest": "/var/lib/kolla/config_files/cinder_backup.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_backup.json", "value": {"command": "/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/lib/cinder", "recurse": true}, {"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "8e107eb8f6989be8375a0ff2dd5b4d57", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 651, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760635.86-13463801617730/source", "state": "file", "uid": 0} >2018-08-20 06:23:56,737 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ssh/', 'owner': u'root', 'perm': u'0600', 'source': u'/host-ssh/ssh_host_*_key'}], 'command': u'/usr/sbin/sshd -D -p 2022'}, 'key': u'/var/lib/kolla/config_files/nova-migration-target.json'}) => {"changed": true, "checksum": "6a0a936a324363cd605e22c2327c17deb6dfbec2", 
"dest": "/var/lib/kolla/config_files/nova-migration-target.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova-migration-target.json", "value": {"command": "/usr/sbin/sshd -D -p 2022", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ssh/", "owner": "root", "perm": "0600", "source": "/host-ssh/ssh_host_*_key"}]}}, "md5sum": "161558d57b182ca70c6f9bbd7fcbda8a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 258, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760636.24-140469584496583/source", "state": "file", "uid": 0} >2018-08-20 06:23:56,897 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/swift_proxy_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760636.38-249502220162914/source", "state": "file", "uid": 0} >2018-08-20 06:23:57,256 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf'}, 'key': 
u'/var/lib/kolla/config_files/nova_virtlogd.json'}) => {"changed": true, "checksum": "8bbfe195e54ddfe481aaad9744174f7344d49681", "dest": "/var/lib/kolla/config_files/nova_virtlogd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_virtlogd.json", "value": {"command": "/usr/sbin/virtlogd --config /etc/libvirt/virtlogd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "786b962e2df778e3ce02b185ef93deac", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 193, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760636.74-197051209616838/source", "state": "file", "uid": 0} >2018-08-20 06:23:57,407 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-auditor /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_auditor.json'}) => {"changed": true, "checksum": "413730fbf3f7935085cfda60cbc1535d8bce0caf", "dest": "/var/lib/kolla/config_files/swift_account_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_auditor.json", "value": {"command": "/usr/bin/swift-account-auditor /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "dfccd947a56ceb6fa2b71c400281a365", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760636.91-223190478907928/source", "state": "file", "uid": 0} >2018-08-20 06:23:57,759 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 
'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760637.26-254206722467240/source", "state": "file", "uid": 0} >2018-08-20 06:23:57,916 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-replicator /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_replicator.json'}) => {"changed": true, "checksum": "2bf5ca66cb377c9fa3e6880f8b078d1312470cde", "dest": "/var/lib/kolla/config_files/swift_account_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_replicator.json", "value": {"command": "/usr/bin/swift-account-replicator /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d4a857b7e18f40f1cc1e6fd265c89770", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760637.41-47798778839005/source", "state": "file", "uid": 0} >2018-08-20 06:23:58,262 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/nova-compute ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}, {'owner': u'nova:nova', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/nova_compute.json'}) => {"changed": true, "checksum": "36b137044b0d21045af74db4b85d6847bbd5cdf7", "dest": "/var/lib/kolla/config_files/nova_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_compute.json", "value": {"command": "/usr/bin/nova-compute ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}, {"owner": "nova:nova", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "da9ad479a10bc1d72f762413824e6639", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 577, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760637.77-108854702848084/source", "state": "file", "uid": 0} >2018-08-20 06:23:58,415 p=1013 u=mistral | changed: [controller-0] => 
(item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-notifier', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_notifier.json'}) => {"changed": true, "checksum": "e01d19d7f7cff24dfcc0d132b7d8ceabba199142", "dest": "/var/lib/kolla/config_files/aodh_notifier.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_notifier.json", "value": {"command": "/usr/bin/aodh-notifier", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "5d4a748030a9a7476ccbd8902fb654fc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760637.92-218623661177841/source", "state": "file", "uid": 0} >2018-08-20 06:23:58,769 p=1013 u=mistral | changed: [compute-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_compute.json'}) => {"changed": true, "checksum": "4b3e97fcd87fd70b35934d1ef908747f302a4d11", "dest": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_compute.json", "value": {"command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": 
"d91832a36a0ad3616a4e78c1af7d0db5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760638.27-204560794847297/source", "state": "file", "uid": 0} >2018-08-20 06:23:58,918 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-scheduler ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_scheduler.json'}) => {"changed": true, "checksum": "23416bae23a2c08d2c534f76d19f8c4bad40ee92", "dest": "/var/lib/kolla/config_files/nova_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_scheduler.json", "value": {"command": "/usr/bin/nova-scheduler ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "d00e4198d95dede3f0b6ac351d57a982", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760638.42-259812238543458/source", "state": "file", "uid": 0} >2018-08-20 06:23:59,403 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cron.json'}) => {"changed": true, "checksum": "a13a92b47f931e2e89d7e4bf5057a4307ab9cd45", "dest": "/var/lib/kolla/config_files/heat_api_cron.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/heat_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "e671c4783cc86fb2ad300fcd11b2f99b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760638.93-257961074310810/source", "state": "file", "uid": 0} >2018-08-20 06:23:59,894 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/certs/neutron.crt'}, {'owner': u'neutron:neutron', 'path': u'/etc/pki/tls/private/neutron.key'}]}, 'key': u'/var/lib/kolla/config_files/neutron_dhcp.json'}) => {"changed": true, "checksum": "da289f102f641cdd0a02df41c443d7d8387741a5", "dest": "/var/lib/kolla/config_files/neutron_dhcp.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_dhcp.json", "value": {"command": "/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/dhcp-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/certs/neutron.crt"}, {"owner": "neutron:neutron", "path": "/etc/pki/tls/private/neutron.key"}]}}, "md5sum": "c5975567082648a9da814c433c49f2d6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 875, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760639.41-56292556446204/source", "state": "file", "uid": 0} >2018-08-20 06:24:00,403 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg', 'permissions': [{'owner': u'haproxy:haproxy', 'path': u'/var/lib/haproxy', 'recurse': True}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/certs/haproxy/*', 'optional': True, 'perm': u'0600'}, {'owner': u'haproxy:haproxy', 'path': u'/etc/pki/tls/private/haproxy/*', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/haproxy.json'}) => {"changed": true, "checksum": "0801385cb9292b3b6eb8440166435242bd90e288", "dest": "/var/lib/kolla/config_files/haproxy.json", "gid": 0, "group": "root", "item": {"key": 
"/var/lib/kolla/config_files/haproxy.json", "value": {"command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg", "config_files": [{"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "haproxy:haproxy", "path": "/var/lib/haproxy", "recurse": true}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/certs/haproxy/*", "perm": "0600"}, {"optional": true, "owner": "haproxy:haproxy", "path": "/etc/pki/tls/private/haproxy/*", "perm": "0600"}]}}, "md5sum": "a2742f7abd50bb0af0a4ba55b2f1f4ff", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 648, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760639.91-265357705889643/source", "state": "file", "uid": 0} >2018-08-20 06:24:00,905 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api_cron.json'}) => {"changed": true, "checksum": "c1a1552a71f4daefebff5234f9d8ba71f4c64d76", "dest": "/var/lib/kolla/config_files/nova_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "6b8ef057a2e5539eacd9f29fc4b94036", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 240, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760640.41-253818232539953/source", "state": "file", "uid": 0} >2018-08-20 06:24:01,448 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_db_sync.json'}) => {"changed": true, "checksum": "a6d2eb62af2f11437c704d13adf72d498324ce2a", "dest": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_db_sync.json", "value": {"command": "/usr/bin/bootstrap_host_exec gnocchi_api /usr/bin/gnocchi-upgrade --sacks-number=128", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "d586f0c2ff043bece10efff986d635a3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 531, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760640.91-231760834675896/source", "state": "file", "uid": 0} >2018-08-20 06:24:01,949 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-reaper /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_reaper.json'}) => {"changed": true, "checksum": "b061cf7478060add5d079aafaeae81b445251a8f", "dest": "/var/lib/kolla/config_files/swift_account_reaper.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_reaper.json", "value": {"command": "/usr/bin/swift-account-reaper /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "0f3bbe74ca95c8cca321ee32e2aff7d1", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760641.46-63023406321137/source", "state": "file", "uid": 0} >2018-08-20 06:24:02,429 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-engine.json'}) => {"changed": true, "checksum": "b7397fff831b47db0b6111663d816a64a389cb25", "dest": "/var/lib/kolla/config_files/sahara-engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-engine.json", "value": {"command": "/usr/bin/sahara-engine --config-file /etc/sahara/sahara.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": 
"/var/log/sahara", "recurse": true}]}}, "md5sum": "ac2c7a84fc46a1f1d128201ce5b67c2d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 360, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760641.96-24089313316710/source", "state": "file", "uid": 0} >2018-08-20 06:24:02,900 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'redis:redis', 'path': u'/var/run/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/lib/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/var/log/redis', 'recurse': True}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/certs/redis.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'redis:redis', 'path': u'/etc/pki/tls/private/redis.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/redis.json'}) => {"changed": true, "checksum": "66d6d6bd51aaa0c100cdfc7688267a4342c7859f", "dest": "/var/lib/kolla/config_files/redis.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/redis.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "redis:redis", "path": 
"/var/run/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/lib/redis", "recurse": true}, {"owner": "redis:redis", "path": "/var/log/redis", "recurse": true}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/certs/redis.crt", "perm": "0600"}, {"optional": true, "owner": "redis:redis", "path": "/etc/pki/tls/private/redis.key", "perm": "0600"}]}}, "md5sum": "ceafff1d742633f8759bdb1af0e3ebd4", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 843, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760642.43-217823047658880/source", "state": "file", "uid": 0} >2018-08-20 06:24:03,375 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-novncproxy --web /usr/share/novnc/ ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_vnc_proxy.json'}) => {"changed": true, "checksum": "b64555136537c36af22340fb15f21f0e01ac3495", "dest": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_vnc_proxy.json", "value": {"command": "/usr/bin/nova-novncproxy --web /usr/share/novnc/ ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "557a4e9522f54cfbd6456516e67f4971", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 271, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760642.91-67831754184314/source", "state": "file", "uid": 0} >2018-08-20 06:24:03,877 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf', 'permissions': [{'owner': u'glance:glance', 'path': u'/var/lib/glance', 'recurse': True}, {'owner': u'glance:glance', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/glance_api.json'}) => {"changed": true, "checksum": "2a93405ac579e31c6e5732983f3d7dd8bed55b33", "dest": "/var/lib/kolla/config_files/glance_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api.json", "value": {"command": "/usr/bin/glance-api --config-file /usr/share/glance/glance-api-dist.conf --config-file /etc/glance/glance-api.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "glance:glance", "path": "/var/lib/glance", "recurse": true}, {"owner": "glance:glance", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "30c5fe40dffc304e7edeab4019e96e92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 556, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760643.38-266016816438769/source", "state": "file", "uid": 0} >2018-08-20 06:24:04,353 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-auditor /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_auditor.json'}) => {"changed": true, 
"checksum": "739f6562d3ea24561c6d8bcf37041a9eac928257", "dest": "/var/lib/kolla/config_files/swift_container_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_auditor.json", "value": {"command": "/usr/bin/swift-container-auditor /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b63816c7c08aef58249d13b65b387da6", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760643.88-118772745732743/source", "state": "file", "uid": 0} >2018-08-20 06:24:04,833 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-panko/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log', 'permissions': [{'owner': u'root:ceilometer', 'path': u'/etc/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_notification.json'}) => {"changed": true, "checksum": "98adef088b2ae2648ac88b812890957ec54eff13", "dest": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_notification.json", "value": {"command": "/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/agent-notification.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-panko/*"}], "permissions": [{"owner": "root:ceilometer", "path": "/etc/panko", "recurse": true}]}}, 
"md5sum": "4a38c9578181c292891f5f7bdb9f791b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 428, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760644.36-253651986772208/source", "state": "file", "uid": 0} >2018-08-20 06:24:05,314 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_expirer.json'}) => {"changed": true, "checksum": "ebbb7ee6895cea2b9278f33e888881d3d3f1a68a", "dest": "/var/lib/kolla/config_files/swift_object_expirer.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_expirer.json", "value": {"command": "/usr/bin/swift-object-expirer /etc/swift/object-expirer.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "e4bf891d8ffc9a015be201a6ef0d5abc", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760644.84-3229146813629/source", "state": "file", "uid": 0} >2018-08-20 06:24:05,790 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log'}, 'key': u'/var/lib/kolla/config_files/ceilometer_agent_central.json'}) => {"changed": true, "checksum": "53d52f7d52f0fb3da33de2c20414eb3248593fdd", "dest": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/ceilometer_agent_central.json", "value": {"command": 
"/usr/bin/ceilometer-polling --polling-namespaces central --logfile /var/log/ceilometer/central.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "2863f917d7ada51e9570fb53bb363eed", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 237, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760645.32-264563628962730/source", "state": "file", "uid": 0} >2018-08-20 06:24:06,280 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760645.8-97994676490544/source", "state": "file", "uid": 0} >2018-08-20 06:24:06,760 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf'}, 'key': u'/var/lib/kolla/config_files/swift_rsync.json'}) => {"changed": true, 
"checksum": "44a8f1a58092190d553d3f589cab9ae566f8dc81", "dest": "/var/lib/kolla/config_files/swift_rsync.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_rsync.json", "value": {"command": "/usr/bin/rsync --daemon --no-detach --config=/etc/rsyncd.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "886febadf691905adf0c129f3aa0197a", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 200, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760646.29-1115749188328/source", "state": "file", "uid": 0} >2018-08-20 06:24:07,231 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-account-server /etc/swift/account-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_account_server.json'}) => {"changed": true, "checksum": "279b64a7d6914d2a03c86c703f53e3d71b1daef1", "dest": "/var/lib/kolla/config_files/swift_account_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_account_server.json", "value": {"command": "/usr/bin/swift-account-server /etc/swift/account-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "b41d67c146c800142c5405fe5a0b332e", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 199, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760646.77-240820307323300/source", "state": "file", "uid": 0} >2018-08-20 06:24:07,728 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 
'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api_cron.json'}) => {"changed": true, "checksum": "06055a69fec2bc513b4c86ceb654a5fc29bd0866", "dest": "/var/lib/kolla/config_files/cinder_api_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "801aba1299d99bfd7e63f66ca7a4ba40", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760647.24-122599848950814/source", "state": "file", "uid": 0} >2018-08-20 06:24:08,204 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_proxy.json'}) => {"changed": true, "checksum": "a0874b803c5238a4eeb12b1265d5d1db93c0d3d4", "dest": "/var/lib/kolla/config_files/swift_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_proxy.json", "value": {"command": "/usr/bin/swift-proxy-server /etc/swift/proxy-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a38e4e3ae519b3b0824e19184e521b36", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 195, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760647.74-99366081865377/source", "state": "file", "uid": 0} >2018-08-20 06:24:08,702 p=1013 u=mistral | changed: 
[controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-updater /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_updater.json'}) => {"changed": true, "checksum": "8dbfc3669a6d79fb30702be502ced7501500480a", "dest": "/var/lib/kolla/config_files/swift_container_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_updater.json", "value": {"command": "/usr/bin/swift-container-updater /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "a697319d04392dc572dff6236144571f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 204, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760648.21-106714531292024/source", "state": "file", "uid": 0} >2018-08-20 06:24:09,163 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/xinetd -dontfork'}, 'key': u'/var/lib/kolla/config_files/clustercheck.json'}) => {"changed": true, "checksum": "3c87335a28b992f90769aea9ea62fb610f8236f1", "dest": "/var/lib/kolla/config_files/clustercheck.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/clustercheck.json", "value": {"command": "/usr/sbin/xinetd -dontfork", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "d74434e7b8bcaca0b227152346c13db8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 165, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760648.71-10151675054394/source", "state": "file", 
"uid": 0} >2018-08-20 06:24:09,667 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'mysql:mysql', 'path': u'/var/log/mysql', 'recurse': True}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/certs/mysql.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'mysql:mysql', 'path': u'/etc/pki/tls/private/mysql.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/mysql.json'}) => {"changed": true, "checksum": "b52f0d28ed1ac134c64994c08b3f2378e8dff494", "dest": "/var/lib/kolla/config_files/mysql.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/mysql.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "mysql:mysql", "path": "/var/log/mysql", "recurse": true}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/certs/mysql.crt", "perm": "0600"}, {"optional": true, "owner": "mysql:mysql", "path": "/etc/pki/tls/private/mysql.key", "perm": "0600"}]}}, "md5sum": "4d15ed291dbe96e88b9a128b0e5c99e9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 687, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760649.18-177030381145721/source", 
"state": "file", "uid": 0} >2018-08-20 06:24:10,170 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_placement.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_placement.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_placement.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760649.67-134543751281367/source", "state": "file", "uid": 0} >2018-08-20 06:24:10,652 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf', 'permissions': [{'owner': u'sahara:sahara', 'path': u'/var/lib/sahara', 'recurse': True}, {'owner': u'sahara:sahara', 'path': u'/var/log/sahara', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/sahara-api.json'}) => {"changed": true, "checksum": "fd070eb1bdc97442fddc24f503fe5e3251b89e28", "dest": "/var/lib/kolla/config_files/sahara-api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/sahara-api.json", "value": {"command": "/usr/bin/sahara-api --config-file /etc/sahara/sahara.conf", 
"config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "sahara:sahara", "path": "/var/lib/sahara", "recurse": true}, {"owner": "sahara:sahara", "path": "/var/log/sahara", "recurse": true}]}}, "md5sum": "bd52668d37c227cc00c418bbe889ab90", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 357, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760650.18-139878088195538/source", "state": "file", "uid": 0} >2018-08-20 06:24:11,121 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_api.json'}) => {"changed": true, "checksum": "f4177197cb07127689ae10a60020efa3a5e0d457", "dest": "/var/lib/kolla/config_files/aodh_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "582326e52a94260e71a4a19dc4d75191", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760650.66-163904689790531/source", "state": "file", "uid": 0} >2018-08-20 06:24:11,596 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/crond -n', 'permissions': [{'owner': u'keystone:keystone', 'path': 
u'/var/log/keystone', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/keystone_cron.json'}) => {"changed": true, "checksum": "815ba71e0584cb12e7d40f794603c6bfb1800626", "dest": "/var/lib/kolla/config_files/keystone_cron.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/keystone_cron.json", "value": {"command": "/usr/sbin/crond -n", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "keystone:keystone", "path": "/var/log/keystone", "recurse": true}]}}, "md5sum": "b3b3bbd6499e09c424665311a5e66136", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 252, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760651.13-155320606697843/source", "state": "file", "uid": 0} >2018-08-20 06:24:12,082 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/neutron_server_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_server_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760651.6-168866044433030/source", "state": "file", "uid": 0} >2018-08-20 06:24:12,555 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': 
True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-replicator /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_replicator.json'}) => {"changed": true, "checksum": "659d25615392d81b2f6bc001067232495de4d6ac", "dest": "/var/lib/kolla/config_files/swift_object_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_replicator.json", "value": {"command": "/usr/bin/swift-object-replicator /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "cdea8a372a87263d5fc44b482867a705", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 201, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760652.09-113701741631527/source", "state": "file", "uid": 0} >2018-08-20 06:24:13,047 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-conductor ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_conductor.json'}) => {"changed": true, "checksum": "01a54792c74d0ebd057e8d0f44e6e8e619283e62", "dest": "/var/lib/kolla/config_files/nova_conductor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_conductor.json", "value": {"command": "/usr/bin/nova-conductor ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "ccbba0ad7a926ceca2bf858b8a9cc376", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 246, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760652.56-116347972768382/source", "state": "file", "uid": 0} >2018-08-20 06:24:13,512 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_api_cfn.json'}) => {"changed": true, "checksum": "454582321236a137f78205f328bae190c02f06b0", "dest": "/var/lib/kolla/config_files/heat_api_cfn.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_api_cfn.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "c04ac0476ee6639fadf252b0e9d9649b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760653.05-1622685782537/source", "state": "file", "uid": 0} >2018-08-20 06:24:13,984 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-api-metadata ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_metadata.json'}) => {"changed": true, "checksum": "edb529183cc509ea82818edf4d88e3650b5ffc57", "dest": "/var/lib/kolla/config_files/nova_metadata.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_metadata.json", "value": {"command": "/usr/bin/nova-api-metadata ", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "45129bd8b5b9aef067edb558a9fb2c68", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 249, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760653.52-60129178886569/source", "state": "file", "uid": 0} >2018-08-20 06:24:14,472 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/neutron_ovs_agent_launcher.sh', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_ovs_agent.json'}) => {"changed": true, "checksum": "bd1c4f0459f65e7f67a969a89c74a8b8cdcfd9f8", "dest": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_ovs_agent.json", "value": {"command": "/neutron_ovs_agent_launcher.sh", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "3599cf6b814b7c628c2887996ca46138", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 261, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760653.99-34433055298389/source", "state": "file", "uid": 0} >2018-08-20 06:24:14,967 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/libqb/force-filesystem-sockets', 'owner': u'root', 'perm': u'0644', 'source': u'/dev/null'}, {'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/', 'merge': True, 'optional': True, 'source': 
u'/var/lib/kolla/config_files/src-tls/*', 'preserve_properties': True}], 'command': u'/usr/sbin/pacemaker_remoted', 'permissions': [{'owner': u'rabbitmq:rabbitmq', 'path': u'/var/lib/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/var/log/rabbitmq', 'recurse': True}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/certs/rabbitmq.crt', 'optional': True, 'perm': u'0600'}, {'owner': u'rabbitmq:rabbitmq', 'path': u'/etc/pki/tls/private/rabbitmq.key', 'optional': True, 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/rabbitmq.json'}) => {"changed": true, "checksum": "205ddacf194881a04c54779e3049b3c59ef6c4af", "dest": "/var/lib/kolla/config_files/rabbitmq.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/rabbitmq.json", "value": {"command": "/usr/sbin/pacemaker_remoted", "config_files": [{"dest": "/etc/libqb/force-filesystem-sockets", "owner": "root", "perm": "0644", "source": "/dev/null"}, {"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/", "merge": true, "optional": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-tls/*"}], "permissions": [{"owner": "rabbitmq:rabbitmq", "path": "/var/lib/rabbitmq", "recurse": true}, {"owner": "rabbitmq:rabbitmq", "path": "/var/log/rabbitmq", "recurse": true}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/certs/rabbitmq.crt", "perm": "0600"}, {"optional": true, "owner": "rabbitmq:rabbitmq", "path": "/etc/pki/tls/private/rabbitmq.key", "perm": "0600"}]}}, "md5sum": "1097dade2a2355fd51207668004d093d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 792, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760654.48-277612130132811/source", "state": "file", "uid": 0} >2018-08-20 06:24:15,451 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/nova-consoleauth ', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_consoleauth.json'}) => {"changed": true, "checksum": "a960878859377dfae6334d9b7eaa9f554ab31798", "dest": "/var/lib/kolla/config_files/nova_consoleauth.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_consoleauth.json", "value": {"command": "/usr/bin/nova-consoleauth ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", "recurse": true}]}}, "md5sum": "2a66fc646aae3e5913e0598ccef3881f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 248, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760654.97-30891937958263/source", "state": "file", "uid": 0} >2018-08-20 06:24:15,934 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-updater /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_updater.json'}) => {"changed": true, "checksum": "4f7a34f38afe301f885e25eb10225c461ab1d0b1", "dest": "/var/lib/kolla/config_files/swift_object_updater.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_updater.json", "value": {"command": "/usr/bin/swift-object-updater /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "71a7e788486d505cfec645da0ac337cd", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760655.46-165148509626659/source", "state": "file", "uid": 0} >2018-08-20 06:24:16,407 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_api.json'}) => {"changed": true, "checksum": "5a73d3b7ef652341120c9298683d3a26f3fb668b", "dest": "/var/lib/kolla/config_files/neutron_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_api.json", "value": {"command": "/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file=/var/log/neutron/server.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}]}}, "md5sum": "c48346aa3f8c096826ebab378db9dfb9", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 549, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760655.94-179597816631823/source", "state": "file", "uid": 0} >2018-08-20 06:24:16,893 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_scheduler.json'}) => {"changed": true, "checksum": "9ec49193a63036ecf32a1479eabdac05dcab06e0", "dest": "/var/lib/kolla/config_files/cinder_scheduler.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_scheduler.json", "value": {"command": "/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "93e9da0d08550be0ed30576cefdfbfbb", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 340, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760656.41-276609365776982/source", "state": "file", "uid": 0} >2018-08-20 06:24:17,386 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-metricd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_metricd.json'}) => {"changed": true, "checksum": "c8763a8c16702042afe553b54212340d800e1509", "dest": "/var/lib/kolla/config_files/gnocchi_metricd.json", "gid": 0, "group": "root", "item": 
{"key": "/var/lib/kolla/config_files/gnocchi_metricd.json", "value": {"command": "/usr/bin/gnocchi-metricd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "db9bd25aa2fcd2845d442869e986e7d8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 471, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760656.9-208272660479587/source", "state": "file", "uid": 0} >2018-08-20 06:24:17,856 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, {'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_metadata_agent.json'}) => {"changed": true, "checksum": "fe01b9d48d08f239bbf9acf7e2a1492397180c8e", "dest": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_metadata_agent.json", "value": {"command": "/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file=/var/log/neutron/metadata-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "a26f6acfc823d6e2e5b34367b859c8fa", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 617, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760657.39-42943473661466/source", "state": "file", "uid": 0} >2018-08-20 06:24:18,311 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-replicator /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_replicator.json'}) => {"changed": true, "checksum": "a418eddca731078cfd8fe2fda7ee64d9ffaf7dda", "dest": "/var/lib/kolla/config_files/swift_container_replicator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_replicator.json", "value": {"command": "/usr/bin/swift-container-replicator /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "930bbe0f8c13b55f664fb3a89dfa1613", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 207, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760657.86-274653405714862/source", "state": "file", "uid": 0} >2018-08-20 06:24:18,793 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': 
u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ', 'permissions': [{'owner': u'heat:heat', 'path': u'/var/log/heat', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/heat_engine.json'}) => {"changed": true, "checksum": "fe3989178a2ea434bae6dfd64b04423e3ea005bc", "dest": "/var/lib/kolla/config_files/heat_engine.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/heat_engine.json", "value": {"command": "/usr/bin/heat-engine --config-file /usr/share/heat/heat-dist.conf --config-file /etc/heat/heat.conf ", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "heat:heat", "path": "/var/log/heat", "recurse": true}]}}, "md5sum": "aee05ebc54399dde3dfc3577c3431a92", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 322, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760658.32-79327559389231/source", "state": "file", "uid": 0} >2018-08-20 06:24:19,245 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'nova:nova', 'path': u'/var/log/nova', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/nova_api.json'}) => {"changed": true, "checksum": "d061b71e9106733354c297cbb7b327a22e476de5", "dest": "/var/lib/kolla/config_files/nova_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/nova_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "nova:nova", "path": "/var/log/nova", 
"recurse": true}]}}, "md5sum": "941db485b7079f2f0e008e1bdff8e45f", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 250, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760658.78-186446290376753/source", "state": "file", "uid": 0} >2018-08-20 06:24:19,712 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-server /etc/swift/object-server.conf', 'permissions': [{'owner': u'swift:swift', 'path': u'/var/cache/swift', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/swift_object_server.json'}) => {"changed": true, "checksum": "460cdcfbcfac45a30b03df89ac84d2f34db64d72", "dest": "/var/lib/kolla/config_files/swift_object_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_server.json", "value": {"command": "/usr/bin/swift-object-server /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "swift:swift", "path": "/var/cache/swift", "recurse": true}]}}, "md5sum": "b00c233fd2cd32c68e429e42918b8245", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 285, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760659.25-11894867259005/source", "state": "file", "uid": 0} >2018-08-20 06:24:20,189 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'stunnel /etc/stunnel/stunnel.conf'}, 'key': u'/var/lib/kolla/config_files/redis_tls_proxy.json'}) => {"changed": true, "checksum": "80800f9f267aaf3497499af70b7945e3b6ae771b", "dest": "/var/lib/kolla/config_files/redis_tls_proxy.json", "gid": 0, "group": 
"root", "item": {"key": "/var/lib/kolla/config_files/redis_tls_proxy.json", "value": {"command": "stunnel /etc/stunnel/stunnel.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "c45d2764863cc585b994d432412ff9e8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 172, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760659.72-139335047494419/source", "state": "file", "uid": 0} >2018-08-20 06:24:20,666 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_api.json'}) => {"changed": true, "checksum": "39f33531116fbcba7a5d9c1cbbc32f4af5e6b981", "dest": "/var/lib/kolla/config_files/gnocchi_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": "5e924ffe736d942bf904a791bf5b5af2", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 475, 
"src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760660.2-270827637462334/source", "state": "file", "uid": 0} >2018-08-20 06:24:21,151 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/cinder_api.json'}) => {"changed": true, "checksum": "7f36445e4c6eb403ce919ca3adee771d4cb3bcce", "dest": "/var/lib/kolla/config_files/cinder_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "bb3e2e5741eb3e5b6c53da835e66d00d", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 256, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760660.67-7623672405851/source", "state": "file", "uid": 0} >2018-08-20 06:24:21,643 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}, {'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf', 'permissions': [{'owner': u'cinder:cinder', 'path': u'/var/log/cinder', 'recurse': True}]}, 'key': 
u'/var/lib/kolla/config_files/cinder_volume.json'}) => {"changed": true, "checksum": "e800a0e1c86f8fa7a41efbf24ce38f48a458ba51", "dest": "/var/lib/kolla/config_files/cinder_volume.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/cinder_volume.json", "value": {"command": "/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}, {"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}], "permissions": [{"owner": "cinder:cinder", "path": "/var/log/cinder", "recurse": true}]}}, "md5sum": "a85ec43ba623807ac022c04663fa68f5", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 579, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760661.16-26646127571418/source", "state": "file", "uid": 0} >2018-08-20 06:24:22,105 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'panko:panko', 'path': u'/var/log/panko', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/panko_api.json'}) => {"changed": true, "checksum": "2db8f01174b9c2aa3a180add472b54891aed5cd6", "dest": "/var/lib/kolla/config_files/panko_api.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/panko_api.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "panko:panko", "path": "/var/log/panko", 
"recurse": true}]}}, "md5sum": "7d9530934c938a4c96f71797957f7ca8", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 253, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760661.65-48578805257418/source", "state": "file", "uid": 0} >2018-08-20 06:24:22,545 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-object-auditor /etc/swift/object-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_object_auditor.json'}) => {"changed": true, "checksum": "fbcdad9219733b81ad969426553906c1a8648897", "dest": "/var/lib/kolla/config_files/swift_object_auditor.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_object_auditor.json", "value": {"command": "/usr/bin/swift-object-auditor /etc/swift/object-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "45f7348541b64a76aec07477ea1d7358", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 198, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760662.11-109357899561938/source", "state": "file", "uid": 0} >2018-08-20 06:24:22,992 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log', 'permissions': [{'owner': u'neutron:neutron', 'path': u'/var/log/neutron', 'recurse': True}, 
{'owner': u'neutron:neutron', 'path': u'/var/lib/neutron', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/neutron_l3_agent.json'}) => {"changed": true, "checksum": "cd233477dc9defd8028ac1a8fe736b8c9fcde9f8", "dest": "/var/lib/kolla/config_files/neutron_l3_agent.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/neutron_l3_agent.json", "value": {"command": "/usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/l3-agent.log", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "neutron:neutron", "path": "/var/log/neutron", "recurse": true}, {"owner": "neutron:neutron", "path": "/var/lib/neutron", "recurse": true}]}}, "md5sum": "b47a8dc2601f0e1c404b9009d1c99c32", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 634, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760662.55-58967129115935/source", "state": "file", "uid": 0} >2018-08-20 06:24:23,466 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-listener', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_listener.json'}) => {"changed": true, "checksum": "a7135286aba5eb111dc77c913fc1f7dc0977e783", "dest": "/var/lib/kolla/config_files/aodh_listener.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_listener.json", "value": {"command": "/usr/bin/aodh-listener", "config_files": [{"dest": "/", "merge": true, 
"preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "ff2b7ae2bb8061a36a8223f5c34a970b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 244, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760663.0-81441545720239/source", "state": "file", "uid": 0} >2018-08-20 06:24:23,929 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/swift-container-server /etc/swift/container-server.conf'}, 'key': u'/var/lib/kolla/config_files/swift_container_server.json'}) => {"changed": true, "checksum": "1f5cc060becbca7be3515f39537993b91e109a6d", "dest": "/var/lib/kolla/config_files/swift_container_server.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/swift_container_server.json", "value": {"command": "/usr/bin/swift-container-server /etc/swift/container-server.conf", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "59a9944c2c3c07fec0293d2efd7d8082", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 203, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760663.47-48675357769780/source", "state": "file", "uid": 0} >2018-08-20 06:24:24,395 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/bin/aodh-evaluator', 'permissions': [{'owner': u'aodh:aodh', 'path': u'/var/log/aodh', 'recurse': True}]}, 'key': u'/var/lib/kolla/config_files/aodh_evaluator.json'}) => {"changed": true, "checksum": "596ee1b7f45471d04a0bc3d985f82ad722631b98", "dest": 
"/var/lib/kolla/config_files/aodh_evaluator.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/aodh_evaluator.json", "value": {"command": "/usr/bin/aodh-evaluator", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "aodh:aodh", "path": "/var/log/aodh", "recurse": true}]}}, "md5sum": "94c5432632bf2acca69de0063414183b", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 245, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760663.94-214528796206797/source", "state": "file", "uid": 0} >2018-08-20 06:24:24,875 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND'}, 'key': u'/var/lib/kolla/config_files/glance_api_tls_proxy.json'}) => {"changed": true, "checksum": "8dec7e00a25c01fc0483b06f5e3d31c64b93ec3e", "dest": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/glance_api_tls_proxy.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}]}}, "md5sum": "1af9170c02e7b1819b37b8d71e67dff0", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 167, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760664.4-255862638647599/source", "state": "file", "uid": 0} >2018-08-20 06:24:25,352 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/etc/iscsi/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-iscsid/*', 'preserve_properties': True}], 'command': u'/usr/sbin/iscsid -f'}, 'key': u'/var/lib/kolla/config_files/iscsid.json'}) => {"changed": 
true, "checksum": "40f9ceb4dd2fc8e9c51bf5152a0fa8e1d16d9137", "dest": "/var/lib/kolla/config_files/iscsid.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/iscsid.json", "value": {"command": "/usr/sbin/iscsid -f", "config_files": [{"dest": "/etc/iscsi/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-iscsid/*"}]}}, "md5sum": "9cd3c2dc0153b127d70141dadfabd12c", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 175, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760664.88-22884823678344/source", "state": "file", "uid": 0} >2018-08-20 06:24:25,862 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}, {'dest': u'/etc/ceph/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src-ceph/', 'preserve_properties': True}], 'command': u'/usr/bin/gnocchi-statsd', 'permissions': [{'owner': u'gnocchi:gnocchi', 'path': u'/var/log/gnocchi', 'recurse': True}, {'owner': u'gnocchi:gnocchi', 'path': u'/etc/ceph/ceph.client.openstack.keyring', 'perm': u'0600'}]}, 'key': u'/var/lib/kolla/config_files/gnocchi_statsd.json'}) => {"changed": true, "checksum": "1a38774f0fed561a8f1ad8c7f0a976a71a7f7008", "dest": "/var/lib/kolla/config_files/gnocchi_statsd.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/gnocchi_statsd.json", "value": {"command": "/usr/bin/gnocchi-statsd", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}, {"dest": "/etc/ceph/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src-ceph/"}], "permissions": [{"owner": "gnocchi:gnocchi", "path": "/var/log/gnocchi", "recurse": true}, {"owner": "gnocchi:gnocchi", "path": "/etc/ceph/ceph.client.openstack.keyring", "perm": "0600"}]}}, "md5sum": 
"b98425b2f26d4e30448a72685b1f89ad", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 470, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760665.36-24833280255076/source", "state": "file", "uid": 0} >2018-08-20 06:24:26,437 p=1013 u=mistral | changed: [controller-0] => (item={'value': {'config_files': [{'dest': u'/', 'merge': True, 'source': u'/var/lib/kolla/config_files/src/*', 'preserve_properties': True}], 'command': u'/usr/sbin/httpd -DFOREGROUND', 'permissions': [{'owner': u'apache:apache', 'path': u'/var/log/horizon/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/etc/openstack-dashboard/', 'recurse': True}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/', 'recurse': False}, {'owner': u'apache:apache', 'path': u'/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/', 'recurse': False}]}, 'key': u'/var/lib/kolla/config_files/horizon.json'}) => {"changed": true, "checksum": "fc55910103403d0bb92e62e940dbd536aff43f84", "dest": "/var/lib/kolla/config_files/horizon.json", "gid": 0, "group": "root", "item": {"key": "/var/lib/kolla/config_files/horizon.json", "value": {"command": "/usr/sbin/httpd -DFOREGROUND", "config_files": [{"dest": "/", "merge": true, "preserve_properties": true, "source": "/var/lib/kolla/config_files/src/*"}], "permissions": [{"owner": "apache:apache", "path": "/var/log/horizon/", "recurse": true}, {"owner": "apache:apache", "path": "/etc/openstack-dashboard/", "recurse": true}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/", "recurse": false}, {"owner": "apache:apache", "path": "/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.d/", "recurse": false}]}}, "md5sum": "77504b6ea1f544f3c70dbc4115bfc354", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 587, "src": 
"/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760665.87-182367626264724/source", "state": "file", "uid": 0} >2018-08-20 06:24:26,567 p=1013 u=mistral | TASK [Clean /var/lib/docker-puppet/docker-puppet-tasks*.json files] ************ >2018-08-20 06:24:26,567 p=1013 u=mistral | Monday 20 August 2018 06:24:26 -0400 (0:00:32.027) 0:05:08.597 ********* >2018-08-20 06:24:26,578 p=1013 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-08-20 06:24:26,601 p=1013 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-08-20 06:24:26,624 p=1013 u=mistral | [WARNING]: Unable to find '/var/lib/docker-puppet' in expected paths (use >-vvvvv to see paths) > >2018-08-20 06:24:26,651 p=1013 u=mistral | TASK [Write docker-puppet-tasks json files] ************************************ >2018-08-20 06:24:26,651 p=1013 u=mistral | Monday 20 August 2018 06:24:26 -0400 (0:00:00.084) 0:05:08.681 ********* >2018-08-20 06:24:27,184 p=1013 u=mistral | changed: [controller-0] => (item={'value': [{'puppet_tags': u'keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain', 'config_volume': u'keystone_init_tasks', 'step_config': u'include ::tripleo::profile::base::keystone', 'config_image': u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2'}], 'key': u'step_3'}) => {"changed": true, "checksum": "71607f1b68d186138364b32eb259dceb1ad248a9", "dest": "/var/lib/docker-puppet/docker-puppet-tasks3.json", "gid": 0, "group": "root", "item": {"key": "step_3", "value": [{"config_image": "192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", "config_volume": "keystone_init_tasks", "puppet_tags": 
"keystone_config,keystone_domain_config,keystone_endpoint,keystone_identity_provider,keystone_paste_ini,keystone_role,keystone_service,keystone_tenant,keystone_user,keystone_user_role,keystone_domain", "step_config": "include ::tripleo::profile::base::keystone"}]}, "md5sum": "4b20fd5b01edfb5ef4ddcd686a3cebe3", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:var_lib_t:s0", "size": 397, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760666.71-182273368872966/source", "state": "file", "uid": 0} >2018-08-20 06:24:27,209 p=1013 u=mistral | TASK [Set host puppet debugging fact string] *********************************** >2018-08-20 06:24:27,209 p=1013 u=mistral | Monday 20 August 2018 06:24:27 -0400 (0:00:00.558) 0:05:09.239 ********* >2018-08-20 06:24:27,240 p=1013 u=mistral | skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:24:27,270 p=1013 u=mistral | skipping: [compute-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:24:27,288 p=1013 u=mistral | skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:24:27,311 p=1013 u=mistral | TASK [Write the config_step hieradata] ***************************************** >2018-08-20 06:24:27,311 p=1013 u=mistral | Monday 20 August 2018 06:24:27 -0400 (0:00:00.101) 0:05:09.341 ********* >2018-08-20 06:24:27,923 p=1013 u=mistral | changed: [controller-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760667.35-122305308219716/source", "state": "file", "uid": 0} >2018-08-20 06:24:27,950 p=1013 u=mistral | changed: [compute-0] => {"changed": true, "checksum": 
"dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760667.38-194516939147379/source", "state": "file", "uid": 0} >2018-08-20 06:24:27,955 p=1013 u=mistral | changed: [ceph-0] => {"changed": true, "checksum": "dfdcc7695edd230e7a2c06fc7b739bfa56506d8f", "dest": "/etc/puppet/hieradata/config_step.json", "gid": 0, "group": "root", "md5sum": "f0ef53dcc6eb8440334b1ebaa90bfd63", "mode": "0600", "owner": "root", "secontext": "system_u:object_r:puppet_etc_t:s0", "size": 11, "src": "/home/tripleo-admin/.ansible/tmp/ansible-tmp-1534760667.43-118569930211453/source", "state": "file", "uid": 0} >2018-08-20 06:24:27,982 p=1013 u=mistral | TASK [Run puppet host configuration for step 1] ******************************** >2018-08-20 06:24:27,982 p=1013 u=mistral | Monday 20 August 2018 06:24:27 -0400 (0:00:00.670) 0:05:10.012 ********* >2018-08-20 06:24:43,186 p=1013 u=mistral | changed: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-08-20 06:24:46,626 p=1013 u=mistral | changed: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-08-20 06:25:53,114 p=1013 u=mistral | changed: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true} >2018-08-20 06:25:53,140 p=1013 u=mistral | TASK [Debug output for task which failed: Run puppet host configuration for step 1] *** >2018-08-20 06:25:53,140 p=1013 u=mistral | Monday 20 August 2018 06:25:53 -0400 (0:01:25.157) 0:06:35.170 ********* >2018-08-20 06:25:53,227 p=1013 u=mistral | ok: [controller-0] => { > 
"failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.87 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Controller1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/File[/etc/systemd/system/resource-agents-deps.target.wants]/ensure: created", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: 
/Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7e0ae873809f218f93b013ccbf092ba5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Pacemaker/Systemd::Unit_file[docker.service]/File[/etc/systemd/system/resource-agents-deps.target.wants/docker.service]/ensure: created", > "Notice: /Stage[main]/Pacemaker::Service/Service[pcsd]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/password: changed password", > "Notice: /Stage[main]/Pacemaker::Corosync/User[hacluster]/groups: groups changed '' to ['haclient']", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[reauthenticate-across-all-nodes]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Create Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/owner: owner changed 'root' to 'hacluster'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/group: group changed 'root' to 'haclient'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker]/mode: mode changed '0755' to '0750'", > "Notice: 
/Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/content: content changed '{md5}a7339d305c611d9fbdd3927992a47bac' to '{md5}a496e81bbdaadac2832bc9b9d91cdeb0'", > "Notice: /Stage[main]/Pacemaker::Corosync/File[etc-pacemaker-authkey]/mode: mode changed '0400' to '0640'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[Start Cluster tripleo_cluster]/returns: executed successfully", > "Notice: /Stage[main]/Pacemaker::Service/Service[corosync]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Service/Service[pacemaker]/enable: enable changed 'false' to 'true'", > "Notice: /Stage[main]/Pacemaker::Corosync/Exec[wait-for-settle]/returns: executed successfully", > "Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Pacemaker::Stonith/Pacemaker::Property[Disable STONITH]/Pcmk_property[property--stonith-enabled]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 
accept all to lo interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[aodh_api]/Tripleo::Firewall::Rule[128 aodh-api]/Firewall[128 aodh-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mgr]/Tripleo::Firewall::Rule[113 ceph_mgr]/Firewall[113 ceph_mgr ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_mon]/Tripleo::Firewall::Rule[110 ceph_mon]/Firewall[110 ceph_mon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 cinder ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_api]/Tripleo::Firewall::Rule[119 cinder]/Firewall[119 
cinder ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[cinder_volume]/Tripleo::Firewall::Rule[120 iscsi initiator]/Firewall[120 iscsi initiator ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[glance_api]/Tripleo::Firewall::Rule[112 glance_api]/Firewall[112 glance_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_api]/Tripleo::Firewall::Rule[129 gnocchi-api]/Firewall[129 gnocchi-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[gnocchi_statsd]/Tripleo::Firewall::Rule[140 gnocchi-statsd]/Firewall[140 gnocchi-statsd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[haproxy]/Tripleo::Firewall::Rule[107 haproxy stats]/Firewall[107 haproxy stats ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api 
ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api]/Tripleo::Firewall::Rule[125 heat_api]/Firewall[125 heat_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[heat_api_cfn]/Tripleo::Firewall::Rule[125 heat_cfn]/Firewall[125 heat_cfn ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[horizon]/Tripleo::Firewall::Rule[127 horizon]/Firewall[127 horizon ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[keystone]/Tripleo::Firewall::Rule[111 keystone]/Firewall[111 keystone ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[memcached]/Tripleo::Firewall::Rule[121 memcached]/Firewall[121 memcached ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[mysql]/Tripleo::Firewall::Rule[104 mysql galera-bundle]/Firewall[104 mysql galera-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_api]/Tripleo::Firewall::Rule[114 neutron api]/Firewall[114 neutron api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[115 neutron dhcp input]/Firewall[115 neutron dhcp input ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_dhcp]/Tripleo::Firewall::Rule[116 neutron dhcp output]/Firewall[116 neutron dhcp output ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_l3]/Tripleo::Firewall::Rule[106 neutron_l3 vrrp]/Firewall[106 neutron_l3 vrrp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_api]/Tripleo::Firewall::Rule[113 nova_api]/Firewall[113 nova_api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_placement]/Tripleo::Firewall::Rule[138 nova_placement]/Firewall[138 nova_placement ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_vnc_proxy]/Tripleo::Firewall::Rule[137 nova_vnc_proxy]/Firewall[137 nova_vnc_proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[130 pacemaker tcp]/Firewall[130 pacemaker tcp ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[pacemaker]/Tripleo::Firewall::Rule[131 pacemaker udp]/Firewall[131 pacemaker udp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[panko_api]/Tripleo::Firewall::Rule[140 panko-api]/Firewall[140 panko-api ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[oslo_messaging_rpc]/Tripleo::Firewall::Rule[109 rabbitmq-bundle]/Firewall[109 rabbitmq-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[redis]/Tripleo::Firewall::Rule[108 redis-bundle]/Firewall[108 redis-bundle ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[sahara_api]/Tripleo::Firewall::Rule[132 sahara]/Firewall[132 sahara ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv4]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_proxy]/Tripleo::Firewall::Rule[122 swift proxy]/Firewall[122 swift proxy ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 74.00 seconds", > "Changes:", > " Total: 169", > "Events:", > " Success: 169", > "Resources:", > " Changed: 165", > " Out of sync: 165", > " Total: 216", > " Restarted: 5", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " File line: 0.00", > " Package manifest: 0.00", > " Augeas: 0.03", > " User: 0.05", > " Sysctl: 0.07", > " File: 0.21", > " Sysctl runtime: 0.27", > " Package: 0.40", > " Pcmk property: 1.00", > " Firewall: 14.64", > " Last run: 1534760752", > " Service: 2.38", > " Config retrieval: 3.30", > " Exec: 51.85", > " Concat fragment: 0.00", > " Total: 
74.20", > "Version:", > " Config: 1534760675", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-08-20 06:25:53,260 p=1013 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.85 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_Compute1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 
'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Tuned/Exec[tuned-adm]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7e0ae873809f218f93b013ccbf092ba5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to '1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[118 neutron vxlan networks]/Firewall[118 neutron vxlan networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[neutron_ovs_agent]/Tripleo::Firewall::Rule[136 neutron gre networks]/Firewall[136 neutron gre networks ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_libvirt]/Tripleo::Firewall::Rule[200 nova_libvirt]/Firewall[200 nova_libvirt ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 
nova_migration_target]/Firewall[113 nova_migration_target ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[nova_migration_target]/Tripleo::Firewall::Rule[113 nova_migration_target]/Firewall[113 nova_migration_target ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 9.52 seconds", > "Changes:", > " Total: 99", > "Events:", > " Success: 99", > "Resources:", > " Total: 141", > " Restarted: 3", > " Out of sync: 99", > " Changed: 99", > "Time:", > " Filebucket: 0.00", > " Concat file: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Anchor: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.11", > " File: 0.19", > " Package: 0.25", > " Sysctl runtime: 0.26", > " Service: 1.18", > " Total: 10.48", > " Last run: 1534760686", > " Config retrieval: 2.23", > " Firewall: 2.48", > " Exec: 3.75", > " Concat fragment: 0.00", > 
"Version:", > " Config: 1534760674", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. 
There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-08-20 06:25:53,953 p=1013 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Scope(Class[Tripleo::Firewall::Post]): At this stage, all network traffic is blocked.", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 1.77 seconds", > "Notice: /Stage[main]/Main/Package_manifest[/var/lib/tripleo/installed-packages/overcloud_CephStorage1]/ensure: created", > "Notice: /Stage[main]/Certmonger/Service[certmonger]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Certmonger::Ca::Local/Exec[extract-and-trust-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Time::Ntp/Service[chronyd]/ensure: ensure changed 'running' to 'stopped'", > "Notice: /Stage[main]/Ntp::Config/File[/etc/ntp.conf]/content: content changed '{md5}913c85f0fde85f83c2d6c030ecf259e9' to '{md5}537f072fe8f462b20e5e88f9121550b2'", > "Notice: /Stage[main]/Ntp::Service/Service[ntp]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Timezone/File[/etc/localtime]/content: content changed '{md5}e4ca381035a34b7a852184cc0dd89baa' to '{md5}c79354b8dbee09e62bbc3fb544853283'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/ensure: ensure changed 'stopped' to 'running'", > "Notice: /Stage[main]/Firewall::Linux::Redhat/Service[ip6tables]/ensure: ensure changed 
'stopped' to 'running'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/File[/etc/pki/ca-trust/source/anchors/undercloud-ca.pem]/ensure: defined content as '{md5}7e0ae873809f218f93b013ccbf092ba5'", > "Notice: /Stage[main]/Tripleo::Trusted_cas/Tripleo::Trusted_ca[undercloud-ca]/Exec[trust-ca-undercloud-ca]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack]/File[/etc/sysconfig/modules/nf_conntrack.modules]/ensure: defined content as '{md5}69dc79067bb7ee8d7a8a12176ceddb02'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/Exec[modprobe nf_conntrack_proto_sctp]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Kmod::Load[nf_conntrack_proto_sctp]/File[/etc/sysconfig/modules/nf_conntrack_proto_sctp.modules]/ensure: defined content as '{md5}7dfc614157ed326e9943593a7aca37c9'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl[fs.inotify.max_user_instances]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.inotify.max_user_instances]/Sysctl_runtime[fs.inotify.max_user_instances]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[fs.suid_dumpable]/Sysctl[fs.suid_dumpable]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl[kernel.dmesg_restrict]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.dmesg_restrict]/Sysctl_runtime[kernel.dmesg_restrict]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl[kernel.pid_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[kernel.pid_max]/Sysctl_runtime[kernel.pid_max]/val: val changed '32768' to 
'1048576'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl[net.core.netdev_max_backlog]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.core.netdev_max_backlog]/Sysctl_runtime[net.core.netdev_max_backlog]/val: val changed '1000' to '10000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl[net.ipv4.conf.all.arp_accept]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.arp_accept]/Sysctl_runtime[net.ipv4.conf.all.arp_accept]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl[net.ipv4.conf.all.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.log_martians]/Sysctl_runtime[net.ipv4.conf.all.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl[net.ipv4.conf.all.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.secure_redirects]/Sysctl_runtime[net.ipv4.conf.all.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl[net.ipv4.conf.all.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.all.send_redirects]/Sysctl_runtime[net.ipv4.conf.all.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl[net.ipv4.conf.default.accept_redirects]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.accept_redirects]/Sysctl_runtime[net.ipv4.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl[net.ipv4.conf.default.log_martians]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.log_martians]/Sysctl_runtime[net.ipv4.conf.default.log_martians]/val: val changed '0' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl[net.ipv4.conf.default.secure_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.secure_redirects]/Sysctl_runtime[net.ipv4.conf.default.secure_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl[net.ipv4.conf.default.send_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.conf.default.send_redirects]/Sysctl_runtime[net.ipv4.conf.default.send_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.ip_nonlocal_bind]/Sysctl[net.ipv4.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl[net.ipv4.neigh.default.gc_thresh1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh1]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh1]/val: val changed '128' to '1024'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl[net.ipv4.neigh.default.gc_thresh2]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh2]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh2]/val: val changed '512' to '2048'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl[net.ipv4.neigh.default.gc_thresh3]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.neigh.default.gc_thresh3]/Sysctl_runtime[net.ipv4.neigh.default.gc_thresh3]/val: val changed '1024' to '4096'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl[net.ipv4.tcp_keepalive_intvl]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_intvl]/Sysctl_runtime[net.ipv4.tcp_keepalive_intvl]/val: val changed '75' to '1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl[net.ipv4.tcp_keepalive_probes]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_probes]/Sysctl_runtime[net.ipv4.tcp_keepalive_probes]/val: val changed '9' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl[net.ipv4.tcp_keepalive_time]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv4.tcp_keepalive_time]/Sysctl_runtime[net.ipv4.tcp_keepalive_time]/val: val changed '7200' to '5'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl[net.ipv6.conf.all.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_ra]/Sysctl_runtime[net.ipv6.conf.all.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl[net.ipv6.conf.all.accept_redirects]/ensure: 
created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.accept_redirects]/Sysctl_runtime[net.ipv6.conf.all.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl[net.ipv6.conf.all.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.autoconf]/Sysctl_runtime[net.ipv6.conf.all.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl[net.ipv6.conf.default.accept_ra]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_ra]/Sysctl_runtime[net.ipv6.conf.default.accept_ra]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl[net.ipv6.conf.default.accept_redirects]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.accept_redirects]/Sysctl_runtime[net.ipv6.conf.default.accept_redirects]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl[net.ipv6.conf.default.autoconf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.autoconf]/Sysctl_runtime[net.ipv6.conf.default.autoconf]/val: val changed '1' to '0'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.conf.lo.disable_ipv6]/Sysctl[net.ipv6.conf.lo.disable_ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.ipv6.ip_nonlocal_bind]/Sysctl[net.ipv6.ip_nonlocal_bind]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl[net.netfilter.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.netfilter.nf_conntrack_max]/Sysctl_runtime[net.netfilter.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl[net.nf_conntrack_max]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Kernel/Sysctl::Value[net.nf_conntrack_max]/Sysctl_runtime[net.nf_conntrack_max]/val: val changed '262144' to '500000'", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}e9fa538db4f9b8222a5de59841d0dcf7' to '{md5}3534841fdb8db5b58d66600a60bf3759'", > "Notice: /Stage[main]/Ssh::Server::Service/Service[sshd]: Triggered 'refresh' from 2 events", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[000 accept related established rules]/Firewall[000 accept related established rules ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[001 accept all icmp]/Firewall[001 accept all icmp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo 
interface]/Firewall[002 accept all to lo interface ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[002 accept all to lo interface]/Firewall[002 accept all to lo interface ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[003 accept ssh]/Firewall[003 accept ssh ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Pre/Tripleo::Firewall::Rule[004 accept ipv6 dhcpv6]/Firewall[004 accept ipv6 dhcpv6 ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ceph_osd]/Tripleo::Firewall::Rule[111 ceph_osd]/Firewall[111 ceph_osd ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[ntp]/Tripleo::Firewall::Rule[105 ntp]/Firewall[105 ntp ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[998 log all]/Firewall[998 log all ipv6]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv4]/ensure: created", > "Notice: /Stage[main]/Tripleo::Firewall::Post/Tripleo::Firewall::Rule[999 drop all]/Firewall[999 drop all ipv6]/ensure: created", > "Notice: /Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/iptables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: 
/Stage[main]/Firewall::Linux::Redhat/File[/etc/sysconfig/ip6tables]/seluser: seluser changed 'unconfined_u' to 'system_u'", > "Notice: Applied catalog in 6.66 seconds", > "Changes:", > " Total: 92", > "Events:", > " Success: 92", > "Resources:", > " Total: 135", > " Restarted: 3", > " Out of sync: 92", > " Changed: 92", > "Time:", > " Concat fragment: 0.00", > " Filebucket: 0.00", > " Concat file: 0.00", > " Anchor: 0.00", > " Cron: 0.00", > " Schedule: 0.00", > " Package manifest: 0.00", > " Augeas: 0.02", > " Sysctl: 0.12", > " File: 0.14", > " Sysctl runtime: 0.17", > " Package: 0.24", > " Service: 1.28", > " Firewall: 1.60", > " Exec: 1.83", > " Last run: 1534760682", > " Config retrieval: 2.04", > " Total: 7.44", > "Version:", > " Config: 1534760674", > " Puppet: 4.8.2", > "Warning: Undefined variable '::deploy_config_name'; ", > " (file & line not available)", > "Warning: Undefined variable 'deploy_config_name'; ", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 54]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 55]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 66]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 68]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > " with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ntp/manifests/init.pp\", 76]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/time/ntp.pp\", 29]", > "Warning: tag is a metaparam; this value will inherit to all contained resources in the tripleo::firewall::rule definition", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/firewall/rule.pp\", 148]:" > ] >} >2018-08-20 06:25:53,981 p=1013 u=mistral | TASK [Run docker-puppet tasks (generate config) during step 1] ***************** >2018-08-20 06:25:53,981 p=1013 u=mistral | Monday 20 August 2018 06:25:53 -0400 (0:00:00.841) 0:06:36.011 ********* >2018-08-20 06:26:15,663 p=1013 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:26:49,701 p=1013 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:28:42,296 p=1013 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:28:42,329 p=1013 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (generate config) during step 1] *** >2018-08-20 06:28:42,329 p=1013 u=mistral | Monday 20 August 2018 06:28:42 -0400 (0:02:48.347) 0:09:24.359 ********* >2018-08-20 
06:28:42,522 p=1013 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-08-20 10:25:54,260 INFO: 16591 -- Running docker-puppet", > "2018-08-20 10:25:54,260 DEBUG: 16591 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- config_volume crond", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- puppet_tags ", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- volumes []", > "2018-08-20 10:25:54,261 DEBUG: 16591 -- Adding new service", > "2018-08-20 10:25:54,261 INFO: 16591 -- Service compilation completed.", > "2018-08-20 10:25:54,262 DEBUG: 16591 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', []]", > "2018-08-20 10:25:54,262 INFO: 16591 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-08-20 10:25:54,273 INFO: 16592 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,273 DEBUG: 16592 -- config_volume crond", > "2018-08-20 10:25:54,273 DEBUG: 16592 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-08-20 10:25:54,273 DEBUG: 16592 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:25:54,273 DEBUG: 16592 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,273 DEBUG: 16592 -- volumes []", > "2018-08-20 10:25:54,274 INFO: 16592 -- Removing container: docker-puppet-crond", > "2018-08-20 10:25:54,355 INFO: 16592 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:26:07,600 DEBUG: 16592 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "b0b426385936: Pulling fs layer", > "919f91872d6f: Pulling fs layer", > "919f91872d6f: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "919f91872d6f: Verifying Checksum", > "919f91872d6f: Download complete", > "b0b426385936: Verifying Checksum", > "b0b426385936: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "b0b426385936: Pull complete", > "919f91872d6f: Pull complete", > "Digest: sha256:373f758caa0aef7f9e786c29b62a7665961ad46e10b1981de52c43135c4f20f7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "", > "2018-08-20 10:26:07,604 DEBUG: 16592 -- NET_HOST enabled", > "2018-08-20 10:26:07,604 DEBUG: 16592 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env 
PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=ceph-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpqx7Uoq:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:26:15,550 DEBUG: 16592 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for ceph-0.localdomain in environment production in 0.52 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.59", > " Total: 0.60", > " Last run: 1534760774", > "Version:", > " Config: 1534760774", > " Puppet: 4.8.2", > 
"Gathering files modified after 2018-08-20 10:26:07.884171925 +0000", > "2018-08-20 10:26:15,551 DEBUG: 16592 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=ceph-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y 
/var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:07.884171925 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ md5sum", > "tar: Removing leading `/' from member names", > "+ awk '{print $1}'", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-08-20 10:26:15,551 INFO: 16592 -- Removing container: docker-puppet-crond", > "2018-08-20 10:26:15,587 DEBUG: 16592 -- docker-puppet-crond", > "2018-08-20 10:26:15,587 INFO: 16592 -- Finished processing puppet configs for crond", > "2018-08-20 10:26:15,588 DEBUG: 16591 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-08-20 10:26:15,588 DEBUG: 16591 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-08-20 10:26:15,591 DEBUG: 16591 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:26:15,591 DEBUG: 16591 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:26:15,591 DEBUG: 16591 -- Updating config hash for logrotate_crond, config_volume=crond hash=df05421b7d05c901ea2660ea0aed61b6" > ] >} >2018-08-20 06:28:43,332 p=1013 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-08-20 10:25:54,275 INFO: 18529 -- Running docker-puppet", > "2018-08-20 10:25:54,275 DEBUG: 18529 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-08-20 
10:25:54,275 DEBUG: 18529 -- config_volume ceilometer", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- puppet_tags ceilometer_config", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- volumes []", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- Adding new service", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- config_volume neutron", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- puppet_tags neutron_plugin_ml2", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-08-20 10:25:54,276 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- config_volume neutron", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- config_volume iscsid", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- puppet_tags iscsid_config", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- manifest include ::tripleo::profile::base::iscsid", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- Adding new service", > "2018-08-20 
10:25:54,277 DEBUG: 18529 -- config_volume nova_libvirt", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- puppet_tags nova_config,nova_paste_api_ini", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "# We'll probably treat it like we do with Neutron plugins.", > "# Until then, just include it in the default nova-compute role.", > "include tripleo::profile::base::nova::compute::libvirt", > "include ::tripleo::profile::base::database::mysql::client", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:25:54,277 DEBUG: 18529 -- volumes []", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- puppet_tags libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- manifest include tripleo::profile::base::nova::libvirt", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- volumes []", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- config_volume nova_libvirt", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- puppet_tags ", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- manifest include ::tripleo::profile::base::sshd", > "include tripleo::profile::base::nova::migration::target", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- config_volume crond", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,278 DEBUG: 18529 -- Adding new service", > "2018-08-20 10:25:54,278 INFO: 18529 -- Service compilation completed.", > "2018-08-20 10:25:54,279 DEBUG: 18529 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config', 
u'include ::tripleo::profile::base::ceilometer::agent::polling\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2', []]", > "2018-08-20 10:25:54,279 DEBUG: 18529 -- - [u'nova_libvirt', u'file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password', u\"# TODO(emilien): figure how to deal with libvirt profile.\\n# We'll probably treat it like we do with Neutron plugins.\\n# Until then, just include it in the default nova-compute role.\\ninclude tripleo::profile::base::nova::compute::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::libvirt\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sshd\\ninclude tripleo::profile::base::nova::migration::target\", u'192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2', []]", > "2018-08-20 10:25:54,279 DEBUG: 18529 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', []]", > "2018-08-20 10:25:54,279 DEBUG: 18529 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-08-20 10:25:54,279 DEBUG: 18529 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2', [u'/etc/iscsi:/etc/iscsi']]", > "2018-08-20 10:25:54,279 INFO: 18529 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-08-20 10:25:54,292 INFO: 18530 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,292 INFO: 18531 -- Starting configuration of nova_libvirt using image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:25:54,293 DEBUG: 18530 -- config_volume ceilometer", > "2018-08-20 10:25:54,293 DEBUG: 18531 -- config_volume nova_libvirt", > "2018-08-20 10:25:54,293 DEBUG: 18530 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config", > "2018-08-20 10:25:54,293 DEBUG: 18531 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password", > "2018-08-20 10:25:54,293 DEBUG: 18530 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-08-20 10:25:54,293 DEBUG: 18531 -- manifest # TODO(emilien): figure how to deal with libvirt profile.", > "include tripleo::profile::base::nova::libvirt", > "include ::tripleo::profile::base::sshd", > "2018-08-20 10:25:54,293 DEBUG: 18530 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,293 DEBUG: 18531 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:25:54,293 DEBUG: 18531 -- volumes []", > "2018-08-20 10:25:54,293 DEBUG: 18530 -- volumes []", > "2018-08-20 10:25:54,294 INFO: 18530 -- Removing container: docker-puppet-ceilometer", > "2018-08-20 10:25:54,294 INFO: 18531 -- Removing container: docker-puppet-nova_libvirt", > "2018-08-20 10:25:54,295 INFO: 18532 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,295 DEBUG: 18532 -- config_volume crond", > "2018-08-20 10:25:54,295 DEBUG: 18532 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-08-20 10:25:54,295 DEBUG: 18532 -- manifest include 
::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:25:54,296 DEBUG: 18532 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,296 DEBUG: 18532 -- volumes []", > "2018-08-20 10:25:54,297 INFO: 18532 -- Removing container: docker-puppet-crond", > "2018-08-20 10:25:54,393 INFO: 18530 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,394 INFO: 18531 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:25:54,394 INFO: 18532 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:26:08,109 DEBUG: 18532 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "b0b426385936: Pulling fs layer", > "919f91872d6f: Pulling fs layer", > "919f91872d6f: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "919f91872d6f: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "b0b426385936: Verifying Checksum", > "b0b426385936: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "b0b426385936: Pull complete", > "919f91872d6f: Pull complete", > "Digest: sha256:373f758caa0aef7f9e786c29b62a7665961ad46e10b1981de52c43135c4f20f7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:26:08,115 DEBUG: 18532 -- NET_HOST enabled", > "2018-08-20 10:26:08,115 DEBUG: 18532 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp7ENnMi:/etc/config.pp:ro,z 
--volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:26:13,790 DEBUG: 18530 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "bfd71860b3fc: Pulling fs layer", > "8e8e24e487c6: Pulling fs layer", > "abd90b860525: Pulling fs layer", > "bfd71860b3fc: Waiting", > "8e8e24e487c6: Waiting", > "abd90b860525: Waiting", > "bfd71860b3fc: Verifying Checksum", > "bfd71860b3fc: Download complete", > "8e8e24e487c6: Verifying Checksum", > "8e8e24e487c6: Download complete", > "abd90b860525: Download complete", > "bfd71860b3fc: Pull complete", > "8e8e24e487c6: Pull complete", > "abd90b860525: Pull complete", > "Digest: sha256:b64134d855985bb79b71c51feabab7a9a4b3c5055bc40ae3a46583ce2f945685", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:26:13,800 DEBUG: 18530 -- NET_HOST enabled", > "2018-08-20 10:26:13,800 DEBUG: 18530 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config --env NAME=ceilometer --env HOSTNAME=compute-0 --env 
NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp28oFQn:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:26:16,494 DEBUG: 18532 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.49 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > "Notice: Applied catalog in 0.04 seconds", > "Changes:", > " Total: 2", > "Events:", > " Success: 2", > "Resources:", > " Changed: 2", > " Out of sync: 2", > " Skipped: 7", > " Total: 9", > "Time:", > " File: 0.00", > " Cron: 0.01", > " Config retrieval: 0.59", > " Total: 0.61", > " Last run: 1534760775", > "Version:", > " Config: 1534760775", > " Puppet: 4.8.2", > "Gathering files modified after 2018-08-20 10:26:08.470047480 +0000", > 
"2018-08-20 10:26:16,494 DEBUG: 18532 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=compute-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined variable 'deploy_config_name'; ", > " (file & line not available)", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ '[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 
10:26:08.470047480 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ md5sum", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-08-20 10:26:16,494 INFO: 18532 -- Removing container: docker-puppet-crond", > "2018-08-20 10:26:16,534 DEBUG: 18532 -- docker-puppet-crond", > "2018-08-20 10:26:16,534 INFO: 18532 -- Finished processing puppet configs for crond", > "2018-08-20 10:26:16,535 INFO: 18532 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:26:16,536 DEBUG: 18532 -- config_volume neutron", > "2018-08-20 10:26:16,536 DEBUG: 18532 -- puppet_tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-08-20 10:26:16,536 DEBUG: 18532 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "include ::tripleo::profile::base::neutron::ovs", > "2018-08-20 10:26:16,536 DEBUG: 18532 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:26:16,536 DEBUG: 18532 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-08-20 10:26:16,537 INFO: 18532 -- Removing container: docker-puppet-neutron", > "2018-08-20 10:26:16,604 INFO: 18532 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:26:23,546 DEBUG: 18532 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "b0b426385936: Already exists", > "bfd71860b3fc: Already exists", > "0005d75b4b48: Pulling fs layer", > "f7e4f140def4: Pulling fs layer", > "f7e4f140def4: Verifying Checksum", > "f7e4f140def4: Download complete", > "0005d75b4b48: Download complete", > "0005d75b4b48: Pull complete", > "f7e4f140def4: Pull complete", > "Digest: sha256:0cd0e9583a7627f44e295392eeb86e50c799918cf38ebd12c36a0714f43b759b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:26:23,555 DEBUG: 18532 -- NET_HOST enabled", > "2018-08-20 10:26:23,556 DEBUG: 18532 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpGngsqb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:26:23,948 DEBUG: 18530 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 1.12 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.65 seconds", > " Total: 24", > " Success: 24", > " Total: 139", > " Skipped: 22", > " Out of sync: 24", > " Changed: 24", > " Resources: 0.00", > " Ceilometer config: 0.53", > " Config retrieval: 1.36", > " Total: 1.89", > " Last run: 1534760782", > " Config: 1534760780", > "Gathering files modified after 2018-08-20 10:26:14.138035784 +0000", > "2018-08-20 10:26:23,948 DEBUG: 18530 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ 
/usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config /etc/config.pp", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:14.138035784 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-08-20 10:26:23,948 INFO: 18530 -- Removing container: docker-puppet-ceilometer", > "2018-08-20 10:26:24,126 DEBUG: 18530 -- docker-puppet-ceilometer", > "2018-08-20 10:26:24,126 INFO: 18530 -- Finished processing puppet configs for ceilometer", > "2018-08-20 
10:26:24,126 INFO: 18530 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:26:24,127 DEBUG: 18530 -- config_volume iscsid", > "2018-08-20 10:26:24,127 DEBUG: 18530 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-08-20 10:26:24,127 DEBUG: 18530 -- manifest include ::tripleo::profile::base::iscsid", > "2018-08-20 10:26:24,127 DEBUG: 18530 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:26:24,127 DEBUG: 18530 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-08-20 10:26:24,128 INFO: 18530 -- Removing container: docker-puppet-iscsid", > "2018-08-20 10:26:24,229 INFO: 18530 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:26:24,984 DEBUG: 18530 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "f989f56727fb: Pulling fs layer", > "f989f56727fb: Download complete", > "f989f56727fb: Pull complete", > "Digest: sha256:1fed697b95f255d2ed0c3ff9331f96cff5d71bb8b695d3004417b945b8902cdb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:26:24,988 DEBUG: 18530 -- NET_HOST enabled", > "2018-08-20 10:26:24,989 DEBUG: 18530 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpOyMWyy:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:26:30,395 DEBUG: 18531 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-compute ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-compute", > "a09ac8d7bbe3: Pulling fs layer", > "b4c1dc3668df: Pulling fs layer", > "a09ac8d7bbe3: Waiting", > "b4c1dc3668df: Waiting", > "a09ac8d7bbe3: Verifying Checksum", > "a09ac8d7bbe3: Download complete", > "b4c1dc3668df: Verifying Checksum", > "b4c1dc3668df: Download complete", > "a09ac8d7bbe3: Pull complete", > "b4c1dc3668df: Pull complete", > "Digest: sha256:f29f80c5aaea7db96e6029b9cf76ab408a01b8a00b7fc276a8960d8adff622ae", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:26:30,398 DEBUG: 18531 -- NET_HOST enabled", > "2018-08-20 10:26:30,399 DEBUG: 18531 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_libvirt --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env HOSTNAME=compute-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpzA7zdy:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-compute:2018-08-17.2", > "2018-08-20 10:26:32,995 DEBUG: 18530 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 0.52 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 10", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.66", > " Total: 0.69", > " Last run: 1534760792", > " Config: 1534760791", > "Gathering files modified after 2018-08-20 10:26:25.232013628 +0000", > "2018-08-20 10:26:32,995 DEBUG: 18530 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat 
-c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:25.232013628 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-08-20 10:26:32,995 INFO: 18530 -- Removing container: docker-puppet-iscsid", > "2018-08-20 10:26:33,034 DEBUG: 18530 -- docker-puppet-iscsid", > "2018-08-20 10:26:33,034 INFO: 18530 -- Finished processing puppet configs for iscsid", > "2018-08-20 10:26:35,049 DEBUG: 18532 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for compute-0.localdomain in environment production in 2.48 seconds", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 0.78 seconds", > " Total: 45", > " Success: 45", > " Total: 174", > " Skipped: 27", > " Out of sync: 45", > " Changed: 45", > " Neutron agent ovs: 0.02", > " Neutron plugin ml2: 0.03", > " Neutron config: 0.60", > " Last run: 1534760793", > " Config retrieval: 2.72", > " Total: 3.37", > " Config: 1534760790", > "Gathering files modified after 2018-08-20 10:26:23.813016409 +0000", > "2018-08-20 10:26:35,050 DEBUG: 18532 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,neutron_plugin_ml2,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Warning: This method is deprecated, please use match 
expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/plugins/ml2.pp\", 53]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync_srcs+=' /var/www'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:23.813016409 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-08-20 10:26:35,050 INFO: 18532 -- Removing container: docker-puppet-neutron", > "2018-08-20 10:26:35,096 DEBUG: 18532 -- docker-puppet-neutron", > "2018-08-20 10:26:35,096 INFO: 18532 -- Finished processing puppet configs for neutron", > "2018-08-20 10:26:49,520 DEBUG: 18531 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog 
for compute-0.localdomain in environment production in 2.60 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{md5}056b96e7e8124e1bc55f77cba4e68ce7' to '{md5}ca0bd1c16023ba4db833da9842a429b4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{md5}09c4fa846e8e27bfa3ab3325900d63ea' to '{md5}2f138c0278e1b666ec77a6d8ba3054a1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{md5}dff145cb4e519333c0096aae8de2e77c' to '{md5}bee4373758b904631513a8691a9d15e1'", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: 
created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/heal_instance_info_cache_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: 
created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/vncserver_proxyclient_address]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/keymap]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created", > "Notice: /Stage[main]/Nova::Compute/Nova_config[glance/verify_glance_signatures]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tls]/ensure: created", > "Notice: /Stage[main]/Nova::Migration::Libvirt/Libvirtd_config[listen_tcp]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{md5}c3bd8b43a09dabb5d90138d7d4368be4'", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/vncserver_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created", > "Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_group]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_ro]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[auth_unix_rw]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_ro_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Libvirtd_config[unix_sock_rw_perms]/ensure: created", > "Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: 
created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/content: content changed '{md5}40d961cd3154f0439fcac1a50bd77b96' to '{md5}612ce064457d8bf508d746dbc1ab9618'", > "Notice: Applied catalog in 8.17 seconds", > " Total: 105", > " Success: 105", > " Changed: 105", > " Out of sync: 105", > " Total: 323", > " Skipped: 48", > " Concat file: 0.00", > " Concat fragment: 0.00", > " File line: 0.00", > " Libvirtd config: 0.02", > " File: 0.08", > " Package: 0.09", > " Augeas: 0.99", > " Total: 10.74", > " Last run: 1534760808", > " Config retrieval: 2.98", > " Nova config: 6.56", > " Config: 1534760797", > "Gathering files modified after 2018-08-20 10:26:30.610003228 +0000", > "2018-08-20 10:26:49,521 DEBUG: 18531 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password ']'", > "+ TAGS='--tags 
file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password'", > "+ origin_of_time=/var/lib/config-data/nova_libvirt.origin_of_time", > "+ touch /var/lib/config-data/nova_libvirt.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_paste_api_ini,libvirtd_config,nova_config,file,libvirt_tls_password /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 545]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 555]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute.pp\", 59]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Unknown variable: '::nova::vncproxy::host'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:31:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_protocol'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:36:5", > "Warning: Unknown variable: '::nova::vncproxy::port'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:41:5", > "Warning: Unknown variable: '::nova::vncproxy::vncproxy_path'. at /etc/puppet/modules/nova/manifests/vncproxy/common.pp:46:5", > "Warning: Unknown variable: '::nova::compute::pci_passthrough'. at /etc/puppet/modules/nova/manifests/compute/pci.pp:19:38", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/compute/libvirt.pp\", 278]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/compute/libvirt.pp\", 33]", > " with Stdlib::Compat::Ip_Address. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/migration/target.pp\", 56]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Exec[set libvirt sasl credentials](provider=posix): Cannot understand environment setting \"TLS_PASSWORD=\"", > "+ rsync_srcs+=' /var/lib/nova/.ssh'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/nova/.ssh /var/lib/config-data/nova_libvirt", > "++ stat -c %y /var/lib/config-data/nova_libvirt.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:30.610003228 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_libvirt", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_libvirt", > "++ find /etc /root /opt /var/spool/cron /var/lib/nova/.ssh -newer /var/lib/config-data/nova_libvirt.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_libvirt --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_libvirt --mtime=1970-01-01", > "2018-08-20 10:26:49,521 INFO: 18531 -- Removing container: docker-puppet-nova_libvirt", > "2018-08-20 10:26:49,571 DEBUG: 18531 -- docker-puppet-nova_libvirt", > "2018-08-20 10:26:49,571 INFO: 18531 -- Finished processing puppet configs for nova_libvirt", > "2018-08-20 10:26:49,572 DEBUG: 18529 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-08-20 10:26:49,573 DEBUG: 18529 -- STARTUP_CONFIG_PATTERN: 
/var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-08-20 10:26:49,577 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:26:49,577 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:26:49,577 DEBUG: 18529 -- Updating config hash for neutron_ovs_bridge, config_volume=iscsid hash=a60276dc4caa9f26715c928f234ad2fc", > "2018-08-20 10:26:49,577 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,577 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,578 DEBUG: 18529 -- Updating config hash for nova_libvirt, config_volume=iscsid hash=7372c0b984f61a17a9364830e0ecf6ff", > "2018-08-20 10:26:49,578 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,578 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,578 DEBUG: 18529 -- Updating config hash for nova_virtlogd, config_volume=iscsid hash=7372c0b984f61a17a9364830e0ecf6ff", > "2018-08-20 10:26:49,581 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-08-20 10:26:49,581 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > 
"2018-08-20 10:26:49,581 DEBUG: 18529 -- Updating config hash for ceilometer_agent_compute, config_volume=iscsid hash=40962edc8700d2c630e0a6ca93d4b75f", > "2018-08-20 10:26:49,581 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt/etc", > "2018-08-20 10:26:49,581 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Updating config hash for neutron_ovs_agent, config_volume=iscsid hash=a60276dc4caa9f26715c928f234ad2fc", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/nova_libvirt.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_libvirt", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Updating config hash for nova_migration_target, config_volume=iscsid hash=7372c0b984f61a17a9364830e0ecf6ff", > "2018-08-20 10:26:49,582 DEBUG: 18529 -- Updating config hash for nova_compute, config_volume=iscsid hash=7372c0b984f61a17a9364830e0ecf6ff", > "2018-08-20 10:26:49,583 DEBUG: 18529 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:26:49,583 DEBUG: 18529 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:26:49,583 DEBUG: 18529 -- Updating config hash for logrotate_crond, config_volume=iscsid 
hash=22d48170a2ff615b614e015c0771323c" > ] >} >2018-08-20 06:28:43,581 p=1013 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "2018-08-20 10:25:54,214 INFO: 28337 -- Running docker-puppet", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- CONFIG: /var/lib/docker-puppet/docker-puppet.json", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- config_volume aodh", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- puppet_tags aodh_api_paste_ini,aodh_config", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- manifest include tripleo::profile::base::aodh::api", > "", > "include ::tripleo::profile::base::database::mysql::client", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:25:54,215 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- config_volume aodh", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- puppet_tags aodh_config", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- manifest include tripleo::profile::base::aodh::evaluator", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- manifest include tripleo::profile::base::aodh::listener", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- manifest include tripleo::profile::base::aodh::notifier", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- config_volume ceilometer", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- puppet_tags ceilometer_config", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "2018-08-20 10:25:54,216 DEBUG: 28337 -- config_image 
192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- config_volume ceilometer", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- puppet_tags ceilometer_config", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- manifest include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- config_volume cinder", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- puppet_tags cinder_config,file,concat,file_line", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- manifest include ::tripleo::profile::base::cinder::api", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- manifest include ::tripleo::profile::base::cinder::backup::ceph", > "2018-08-20 10:25:54,217 DEBUG: 28337 -- manifest include ::tripleo::profile::base::cinder::scheduler", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_volume cinder", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- puppet_tags cinder_config,file,concat,file_line", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- manifest include ::tripleo::profile::base::lvm", > "include ::tripleo::profile::base::cinder::volume", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_volume clustercheck", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- puppet_tags file", > "2018-08-20 
10:25:54,218 DEBUG: 28337 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_volume glance_api", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- puppet_tags glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- manifest include ::tripleo::profile::base::glance::api", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_volume gnocchi", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- puppet_tags gnocchi_api_paste_ini,gnocchi_config", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- manifest include ::tripleo::profile::base::gnocchi::api", > "2018-08-20 10:25:54,218 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_volume gnocchi", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- puppet_tags gnocchi_config", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- manifest include ::tripleo::profile::base::gnocchi::metricd", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- manifest include ::tripleo::profile::base::gnocchi::statsd", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_volume haproxy", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- puppet_tags haproxy_config", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- manifest exec {'wait-for-settle': command => '/bin/true' 
}", > "class tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}", > "['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::pacemaker::haproxy_bundle", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_volume heat_api", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- puppet_tags heat_config,file,concat,file_line", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- manifest include ::tripleo::profile::base::heat::api", > "2018-08-20 10:25:54,219 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_volume heat_api_cfn", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- puppet_tags heat_config,file,concat,file_line", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_volume heat", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_image 
192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_volume horizon", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- puppet_tags horizon_config", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- manifest include ::tripleo::profile::base::horizon", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_volume iscsid", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- puppet_tags iscsid_config", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- manifest include ::tripleo::profile::base::iscsid", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:25:54,220 DEBUG: 28337 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_volume keystone", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- puppet_tags keystone_config,keystone_domain_config", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::keystone", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_volume memcached", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- puppet_tags file", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- manifest include ::tripleo::profile::base::memcached", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_volume mysql", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- manifest ['Mysql_datadir', 
'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "exec {'wait-for-settle': command => '/bin/true' }", > "include ::tripleo::profile::pacemaker::database::mysql_bundle", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_volume neutron", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- puppet_tags neutron_config,neutron_api_config", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- manifest include tripleo::profile::base::neutron::server", > "2018-08-20 10:25:54,221 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- config_volume neutron", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- puppet_tags neutron_plugin_ml2", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- manifest include ::tripleo::profile::base::neutron::plugins::ml2", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- puppet_tags neutron_config,neutron_dhcp_agent_config", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- manifest include tripleo::profile::base::neutron::dhcp", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- puppet_tags neutron_config,neutron_l3_agent_config", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- manifest include tripleo::profile::base::neutron::l3", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- puppet_tags neutron_config,neutron_metadata_agent_config", > "2018-08-20 10:25:54,222 DEBUG: 28337 -- manifest include tripleo::profile::base::neutron::metadata", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- Existing service, appending puppet 
tags and manifest", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- config_volume neutron", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- puppet_tags neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- manifest include ::tripleo::profile::base::neutron::ovs", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- config_volume nova", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- puppet_tags nova_config", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::api", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- manifest include tripleo::profile::base::nova::conductor", > "2018-08-20 10:25:54,223 DEBUG: 28337 -- manifest include tripleo::profile::base::nova::consoleauth", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_volume nova_placement", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- puppet_tags nova_config", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- manifest include tripleo::profile::base::nova::placement", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_volume nova", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- manifest include 
tripleo::profile::base::nova::scheduler", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- manifest include tripleo::profile::base::nova::vncproxy", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_volume crond", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- puppet_tags ", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:25:54,224 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_volume panko", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- puppet_tags panko_api_paste_ini,panko_config", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- manifest include tripleo::profile::base::panko::api", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_volume rabbitmq", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- puppet_tags file", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "include ::tripleo::profile::base::rabbitmq", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_volume redis", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- puppet_tags exec", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_volume sahara", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- puppet_tags 
sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- manifest include ::tripleo::profile::base::sahara::api", > "2018-08-20 10:25:54,225 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- Adding new service", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- config_volume sahara", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- puppet_tags sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- manifest include ::tripleo::profile::base::sahara::engine", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- volumes []", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- config_volume swift", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- puppet_tags swift_config,swift_proxy_config,swift_keymaster_config", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- manifest include ::tripleo::profile::base::swift::proxy", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- config_volume swift_ringbuilder", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- puppet_tags exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- puppet_tags 
swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-08-20 10:25:54,226 DEBUG: 28337 -- manifest include ::tripleo::profile::base::swift::storage", > "class xinetd() {}", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- Existing service, appending puppet tags and manifest", > "2018-08-20 10:25:54,227 INFO: 28337 -- Service compilation completed.", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'nova_placement', u'file,file_line,concat,augeas,cron,nova_config', u'include tripleo::profile::base::nova::placement\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'aodh', u'file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config', u'include tripleo::profile::base::aodh::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::evaluator\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::listener\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::aodh::notifier\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'heat_api', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'swift_ringbuilder', 
u'file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball', u'include ::tripleo::profile::base::swift::ringbuilder', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'sahara', u'file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template', u'include ::tripleo::profile::base::sahara::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::sahara::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'mysql', u'file,file_line,concat,augeas,cron,file', u\"['Mysql_datadir', 'Mysql_user', 'Mysql_database', 'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }\\nexec {'wait-for-settle': command => '/bin/true' }\\ninclude ::tripleo::profile::pacemaker::database::mysql_bundle\", u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', []]", > "2018-08-20 10:25:54,227 DEBUG: 28337 -- - [u'gnocchi', u'file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config', u'include ::tripleo::profile::base::gnocchi::api\\n\\ninclude ::tripleo::profile::base::gnocchi::metricd\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::gnocchi::statsd\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'clustercheck', 
u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::pacemaker::clustercheck', u'192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'redis', u'file,file_line,concat,augeas,cron,exec', u'include ::tripleo::profile::pacemaker::database::redis_bundle', u'192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'nova', u'file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config', u\"['Nova_cell_v2'].each |String $val| { noop_resource($val) }\\ninclude tripleo::profile::base::nova::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::conductor\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::consoleauth\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude tripleo::profile::base::nova::vncproxy\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'iscsid', u'file,file_line,concat,augeas,cron,iscsid_config', u'include ::tripleo::profile::base::iscsid', u'192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2', [u'/etc/iscsi:/etc/iscsi']]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'glance_api', u'file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config', u'include ::tripleo::profile::base::glance::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'keystone', u'file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config', 
u\"['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::keystone\\n\\ninclude ::tripleo::profile::base::database::mysql::client\", u'192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'memcached', u'file,file_line,concat,augeas,cron,file', u'include ::tripleo::profile::base::memcached\\n', u'192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'panko', u'file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config', u'include tripleo::profile::base::panko::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'heat', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::engine\\n\\ninclude ::tripleo::profile::base::database::mysql::client', u'192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'cinder', u'file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line', u'include ::tripleo::profile::base::cinder::api\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::backup::ceph\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::cinder::scheduler\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::lvm\\ninclude ::tripleo::profile::base::cinder::volume\\n\\ninclude ::tripleo::profile::base::database::mysql::client', 
u'192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'swift', u'file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server', u'include ::tripleo::profile::base::swift::proxy\\n\\ninclude ::tripleo::profile::base::swift::storage\\n\\nclass xinetd() {}', u'192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'crond', 'file,file_line,concat,augeas,cron', u'include ::tripleo::profile::base::logging::logrotate', u'192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'haproxy', u'file,file_line,concat,augeas,cron,haproxy_config', u\"exec {'wait-for-settle': command => '/bin/true' }\\nclass tripleo::firewall(){}; define tripleo::firewall::rule( $port = undef, $dport = undef, $sport = undef, $proto = undef, $action = undef, $state = undef, $source = undef, $iniface = undef, $chain = undef, $destination = undef, $extras = undef){}\\n['pcmk_bundle', 'pcmk_resource', 'pcmk_property', 'pcmk_constraint', 'pcmk_resource_default'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::pacemaker::haproxy_bundle\", u'192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2', [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'ceilometer', u'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', u'include ::tripleo::profile::base::ceilometer::agent::polling\\n\\ninclude 
::tripleo::profile::base::ceilometer::agent::notification\\n', u'192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'rabbitmq', u'file,file_line,concat,augeas,cron,file', u\"['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }\\ninclude ::tripleo::profile::base::rabbitmq\\n\", u'192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'neutron', u'file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2', u'include tripleo::profile::base::neutron::server\\n\\ninclude ::tripleo::profile::base::database::mysql::client\\ninclude ::tripleo::profile::base::neutron::plugins::ml2\\n\\ninclude tripleo::profile::base::neutron::dhcp\\n\\ninclude tripleo::profile::base::neutron::l3\\n\\ninclude tripleo::profile::base::neutron::metadata\\n\\ninclude ::tripleo::profile::base::neutron::ovs\\n', u'192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2', [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'horizon', u'file,file_line,concat,augeas,cron,horizon_config', u'include ::tripleo::profile::base::horizon\\n', u'192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 DEBUG: 28337 -- - [u'heat_api_cfn', u'file,file_line,concat,augeas,cron,heat_config,file,concat,file_line', u'include ::tripleo::profile::base::heat::api_cfn\\n', u'192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2', []]", > "2018-08-20 10:25:54,228 INFO: 28337 -- Starting multiprocess configuration steps. 
Using 3 processes.", > "2018-08-20 10:25:54,239 INFO: 28338 -- Starting configuration of nova_placement using image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:25:54,240 DEBUG: 28338 -- config_volume nova_placement", > "2018-08-20 10:25:54,240 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,nova_config", > "2018-08-20 10:25:54,240 DEBUG: 28338 -- manifest include tripleo::profile::base::nova::placement", > "2018-08-20 10:25:54,240 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:25:54,240 DEBUG: 28338 -- volumes []", > "2018-08-20 10:25:54,240 INFO: 28339 -- Starting configuration of swift_ringbuilder using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:25:54,240 DEBUG: 28339 -- config_volume swift_ringbuilder", > "2018-08-20 10:25:54,240 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball", > "2018-08-20 10:25:54,240 INFO: 28340 -- Starting configuration of gnocchi using image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:25:54,240 DEBUG: 28339 -- manifest include ::tripleo::profile::base::swift::ringbuilder", > "2018-08-20 10:25:54,240 DEBUG: 28340 -- config_volume gnocchi", > "2018-08-20 10:25:54,240 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:25:54,240 DEBUG: 28339 -- volumes []", > "2018-08-20 10:25:54,240 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config", > "2018-08-20 10:25:54,240 DEBUG: 28340 -- manifest include ::tripleo::profile::base::gnocchi::api", > "include 
::tripleo::profile::base::gnocchi::metricd", > "include ::tripleo::profile::base::gnocchi::statsd", > "2018-08-20 10:25:54,241 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:25:54,241 DEBUG: 28340 -- volumes []", > "2018-08-20 10:25:54,241 INFO: 28338 -- Removing container: docker-puppet-nova_placement", > "2018-08-20 10:25:54,242 INFO: 28340 -- Removing container: docker-puppet-gnocchi", > "2018-08-20 10:25:54,242 INFO: 28339 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-20 10:25:54,326 INFO: 28338 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:25:54,326 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:25:54,331 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:26:13,167 DEBUG: 28339 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server", > "378837c0e24a: Pulling fs layer", > "e17262bc2341: Pulling fs layer", > "b0b426385936: Pulling fs layer", > "bfd71860b3fc: Pulling fs layer", > "0e832a4aedc5: Pulling fs layer", > "3dc1442f577c: Pulling fs layer", > "3dc1442f577c: Waiting", > "0e832a4aedc5: Waiting", > "bfd71860b3fc: Waiting", > "e17262bc2341: Verifying Checksum", > "e17262bc2341: Download complete", > "bfd71860b3fc: Verifying Checksum", > "bfd71860b3fc: Download complete", > "378837c0e24a: Verifying Checksum", > "378837c0e24a: Download complete", > "b0b426385936: Verifying Checksum", > "b0b426385936: Download complete", > "0e832a4aedc5: Verifying Checksum", > "0e832a4aedc5: Download complete", > "3dc1442f577c: Verifying Checksum", > "3dc1442f577c: Download complete", > "378837c0e24a: Pull complete", > "e17262bc2341: Pull complete", > "b0b426385936: Pull complete", > "bfd71860b3fc: Pull complete", > "0e832a4aedc5: Pull complete", > "3dc1442f577c: Pull complete", > "Digest: sha256:cf3607b0215dc130a8b60596702dadb7e8240f93bc273c6e14fb0ed5cc17235e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:26:13,170 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:26:13,170 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift_ringbuilder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball --env NAME=swift_ringbuilder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpm_crtj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:26:18,094 DEBUG: 28338 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-placement-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-placement-api", > "a09ac8d7bbe3: Pulling fs layer", > "b6120cdcbaae: Pulling fs layer", > "a09ac8d7bbe3: Waiting", > "b6120cdcbaae: Waiting", > "b6120cdcbaae: Download complete", > "a09ac8d7bbe3: Verifying Checksum", > "a09ac8d7bbe3: Download complete", > "a09ac8d7bbe3: Pull complete", > "b6120cdcbaae: Pull complete", > "Digest: sha256:1ef8647baf764d22c9d74ca6aa1b0913a1385d3f2d497f6320d0f2a1a7e48177", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:26:18,097 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:26:18,097 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova_placement --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config --env NAME=nova_placement --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp8cmqAU:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-placement-api:2018-08-17.2", > "2018-08-20 10:26:21,237 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-gnocchi-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-gnocchi-api", > "282a5344773c: Pulling fs layer", > "78750e81faf6: Pulling fs layer", > "282a5344773c: Waiting", > "78750e81faf6: Verifying Checksum", > "78750e81faf6: Download complete", > "282a5344773c: Verifying Checksum", > "282a5344773c: Download complete", > "282a5344773c: Pull complete", > "78750e81faf6: Pull complete", > "Digest: sha256:01b670df1a4e39d29ce3c89cdecd987397426e7e87a9cab6d5c058baf7fa6408", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:26:21,240 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:26:21,241 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-gnocchi --env PUPPET_TAGS=file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config --env NAME=gnocchi --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpy5fIza:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-gnocchi-api:2018-08-17.2", > "2018-08-20 10:26:28,214 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.10 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[fetch_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[extract_swift_ring_tarball]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Swift/File[/var/lib/swift]/group: group changed 'root' to 'swift'", > "Notice: /Stage[main]/Swift/File[/etc/swift/swift.conf]/owner: owner changed 'root' to 'swift'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[object]/Exec[create_object]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[account]/Exec[create_account]/returns: 
executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Create[container]/Exec[create_container]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.12:%PORT%/d1]/Ring_object_device[172.17.4.12:6000/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.12:%PORT%/d1]/Ring_container_device[172.17.4.12:6001/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Tripleo::Profile::Base::Swift::Add_devices[r1z1-172.17.4.12:%PORT%/d1]/Ring_account_device[172.17.4.12:6002/d1]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[object]/Exec[rebalance_object]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[account]/Exec[rebalance_account]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Swift::Ringbuilder::Rebalance[container]/Exec[rebalance_container]: Triggered 'refresh' from 1 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[create_swift_ring_tarball]: Triggered 'refresh' from 3 events", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Ringbuilder/Exec[upload_swift_ring_tarball]: Triggered 'refresh' from 2 events", > "Notice: Applied catalog in 4.39 seconds", > "Changes:", > " Total: 11", > "Events:", > " Success: 11", > "Resources:", > " Changed: 11", > " Out of sync: 11", > " Skipped: 19", > " Total: 36", > " Restarted: 6", > "Time:", > " File: 0.00", > " Ring account device: 0.49", > " Ring container device: 0.56", > " 
Ring object device: 0.63", > " Config retrieval: 1.24", > " Exec: 1.38", > " Last run: 1534760787", > " Total: 4.31", > "Version:", > " Config: 1534760781", > " Puppet: 4.8.2", > "Gathering files modified after 2018-08-20 10:26:13.471821695 +0000", > "2018-08-20 10:26:28,214 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ cp -a /tmp/puppet-etc/auth.conf /tmp/puppet-etc/hiera.yaml /tmp/puppet-etc/hieradata /tmp/puppet-etc/modules /tmp/puppet-etc/puppet.conf /tmp/puppet-etc/ssl /etc/puppet", > "+ rm -Rf /etc/puppet/ssl", > "+ echo '{\"step\": 6}'", > "+ TAGS=", > "+ '[' -n file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball'", > "+ origin_of_time=/var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ touch /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ sync", > "+ set +e", > "+ FACTER_hostname=controller-0", > "+ FACTER_uuid=docker", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec,fetch_swift_ring_tarball,extract_swift_ring_tarball,ring_object_device,swift::ringbuilder::create,tripleo::profile::base::swift::add_devices,swift::ringbuilder::rebalance,create_swift_ring_tarball,upload_swift_ring_tarball /etc/config.pp", > "Warning: Facter: Could not retrieve fact='nic_alias', resolution='<anonymous>': Could not execute '/usr/bin/os-net-config -i': command not found", > "Warning: Undefined 
variable 'deploy_config_name'; ", > " (file & line not available)", > "Warning: ModuleLoader: module 'swift' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: This method is deprecated, please use the stdlib validate_legacy function,", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/ringbuilder.pp\", 113]:[\"/etc/config.pp\", 2]", > " (at /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:28:in `deprecation')", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/ringbuilder/create.pp\", 44]:", > "Warning: Unexpected line: Ring file /etc/swift/object.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta", > "Warning: Unexpected line: There are no devices in this ring, or all devices have been deleted", > "Warning: Unexpected line: Ring file /etc/swift/container.ring.gz not found, probably it hasn't been written yet", > "Warning: Unexpected line: Ring file /etc/swift/account.ring.gz not found, probably it hasn't been written yet", > "+ rc=2", > "+ set -e", > "+ '[' 2 -ne 2 -a 2 -ne 0 ']'", > "+ '[' -z '' ']'", > "+ archivedirs=(\"/etc\" \"/root\" \"/opt\" \"/var/lib/ironic/tftpboot\" \"/var/lib/ironic/httpboot\" \"/var/www\" \"/var/spool/cron\" \"/var/lib/nova/.ssh\")", > "+ rsync_srcs=", > "+ for d in '\"${archivedirs[@]}\"'", > "+ '[' -d /etc ']'", > "+ rsync_srcs+=' /etc'", > "+ '[' -d /root ']'", > "+ rsync_srcs+=' /root'", > "+ '[' -d /opt ']'", > "+ rsync_srcs+=' /opt'", > "+ '[' -d /var/lib/ironic/tftpboot ']'", > "+ '[' -d /var/lib/ironic/httpboot ']'", > "+ '[' -d /var/www ']'", > "+ rsync_srcs+=' /var/www'", > "+ 
'[' -d /var/spool/cron ']'", > "+ rsync_srcs+=' /var/spool/cron'", > "+ '[' -d /var/lib/nova/.ssh ']'", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift_ringbuilder", > "++ stat -c %y /var/lib/config-data/swift_ringbuilder.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:13.471821695 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift_ringbuilder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift_ringbuilder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift_ringbuilder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift_ringbuilder --mtime=1970-01-01", > "+ md5sum", > "+ awk '{print $1}'", > "tar: Removing leading `/' from member names", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift_ringbuilder --mtime=1970-01-01", > "2018-08-20 10:26:28,214 INFO: 28339 -- Removing container: docker-puppet-swift_ringbuilder", > "2018-08-20 10:26:28,268 DEBUG: 28339 -- docker-puppet-swift_ringbuilder", > "2018-08-20 10:26:28,268 INFO: 28339 -- Finished processing puppet configs for swift_ringbuilder", > "2018-08-20 10:26:28,269 INFO: 28339 -- Starting configuration of sahara using image 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:26:28,269 DEBUG: 28339 -- config_volume sahara", > "2018-08-20 10:26:28,269 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template", > "2018-08-20 10:26:28,269 DEBUG: 28339 -- manifest include ::tripleo::profile::base::sahara::api", > "include ::tripleo::profile::base::sahara::engine", > "2018-08-20 10:26:28,269 DEBUG: 28339 -- config_image 
192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:26:28,269 DEBUG: 28339 -- volumes []", > "2018-08-20 10:26:28,270 INFO: 28339 -- Removing container: docker-puppet-sahara", > "2018-08-20 10:26:28,329 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:26:30,618 DEBUG: 28339 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-sahara-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-sahara-api", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "b0b426385936: Already exists", > "bfd71860b3fc: Already exists", > "ca7c84c1d074: Pulling fs layer", > "4d87a337920b: Pulling fs layer", > "4d87a337920b: Verifying Checksum", > "4d87a337920b: Download complete", > "ca7c84c1d074: Download complete", > "ca7c84c1d074: Pull complete", > "4d87a337920b: Pull complete", > "Digest: sha256:435e27445560e8a06979f3ac19b65b342aa0eeec602aec12d14c46e9eba70618", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:26:30,621 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:26:30,621 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-sahara --env PUPPET_TAGS=file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template --env NAME=sahara --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmphCrsIj:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro 
--volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-sahara-api:2018-08-17.2", > "2018-08-20 10:26:34,957 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.92 seconds", > "Notice: /Stage[main]/Apache::Mod::Mime/File[mime.conf]/ensure: defined content as '{md5}9da85e58f3bd6c780ce76db603b7f028'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/File[mime_magic.conf]/ensure: defined content as '{md5}b258529b332429e2ff8344f726a95457'", > "Notice: /Stage[main]/Apache::Mod::Alias/File[alias.conf]/ensure: defined content as '{md5}983e865be85f5e0daaed7433db82995e'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/File[autoindex.conf]/ensure: defined content as '{md5}2421a3c6df32c7e38c2a7a22afdf5728'", > "Notice: /Stage[main]/Apache::Mod::Deflate/File[deflate.conf]/ensure: defined content as '{md5}a045d750d819b1e9dae3fbfb3f20edd5'", > "Notice: /Stage[main]/Apache::Mod::Dir/File[dir.conf]/ensure: defined content as '{md5}c741d8ea840e6eb999d739eed47c69d7'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/File[negotiation.conf]/ensure: defined content as '{md5}47284b5580b986a6ba32580b6ffb9fd7'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/File[setenvif.conf]/ensure: defined content as '{md5}c7ede4173da1915b7ec088201f030c28'", > "Notice: /Stage[main]/Apache::Mod::Prefork/File[/etc/httpd/conf.modules.d/prefork.conf]/ensure: defined content as '{md5}f58b0483b70b4e73b5f67ff37b8f24a0'", > "Notice: 
/Stage[main]/Apache::Mod::Status/File[status.conf]/ensure: defined content as '{md5}fa95c477a2085c1f7f17ee5f8eccfb90'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully", > "Notice: /Stage[main]/Gnocchi::Db/Gnocchi_config[indexer/url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Gnocchi_config[api/auth_mode]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage/Gnocchi_config[storage/coordination_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Incoming::Redis/Gnocchi_config[incoming/redis_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/driver]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_keyring]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_pool]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Storage::Ceph/Gnocchi_config[storage/ceph_conffile]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/workers]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Metricd/Gnocchi_config[metricd/metric_processing_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/resource_id]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/archive_policy_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Statsd/Gnocchi_config[statsd/flush_delay]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Gnocchi::Logging/Oslo::Log[gnocchi_config]/Gnocchi_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Policy/Oslo::Policy[gnocchi_config]/Gnocchi_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Api/Oslo::Middleware[gnocchi_config]/Gnocchi_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Keystone::Authtoken/Keystone::Resource::Authtoken[gnocchi_config]/Gnocchi_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as 
'{md5}3cb292a5545de9f30e5168d05f41a649'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf/httpd.conf]/content: content changed '{md5}c6d1bc1fdbcb93bbd2596e4703f4108c' to '{md5}3bd0015a5b258bebc53d757643b45830'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[log_config]/File[log_config.load]/ensure: defined content as '{md5}785d35cb285e190d589163b45263ca89'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[systemd]/File[systemd.load]/ensure: defined content as '{md5}26e5d44aae258b3e9d821cbbbd3e2826'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[unixd]/File[unixd.load]/ensure: defined content as '{md5}0e8468ecc1265f8947b8725f4d1be9c0'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_host]/File[authz_host.load]/ensure: defined content as '{md5}d1045f54d2798499ca0f030ca0eef920'", > "Notice: /Stage[main]/Apache::Mod::Actions/Apache::Mod[actions]/File[actions.load]/ensure: defined content as '{md5}599866dfaf734f60f7e2d41ee8235515'", > "Notice: /Stage[main]/Apache::Mod::Authn_core/Apache::Mod[authn_core]/File[authn_core.load]/ensure: defined content as '{md5}704d6e8b02b0eca0eba4083960d16c52'", > "Notice: /Stage[main]/Apache::Mod::Cache/Apache::Mod[cache]/File[cache.load]/ensure: defined content as '{md5}01e4d392225b518a65b0f7d6c4e21d29'", > "Notice: /Stage[main]/Apache::Mod::Ext_filter/Apache::Mod[ext_filter]/File[ext_filter.load]/ensure: defined content as '{md5}76d5e0ac3411a4be57ac33ebe2e52ac8'", > "Notice: /Stage[main]/Apache::Mod::Mime/Apache::Mod[mime]/File[mime.load]/ensure: defined content as '{md5}e36257b9efab01459141d423cae57c7c'", > "Notice: /Stage[main]/Apache::Mod::Mime_magic/Apache::Mod[mime_magic]/File[mime_magic.load]/ensure: defined content as '{md5}cb8670bb2fb352aac7ebf3a85d52094c'", > "Notice: /Stage[main]/Apache::Mod::Rewrite/Apache::Mod[rewrite]/File[rewrite.load]/ensure: defined content as '{md5}26e2683352fc1599f29573ff0d934e79'", > "Notice: 
/Stage[main]/Apache::Mod::Speling/Apache::Mod[speling]/File[speling.load]/ensure: defined content as '{md5}f82e9e6b871a276c324c9eeffcec8a61'", > "Notice: /Stage[main]/Apache::Mod::Suexec/Apache::Mod[suexec]/File[suexec.load]/ensure: defined content as '{md5}c7d5c61c534ba423a79b0ae78ff9be35'", > "Notice: /Stage[main]/Apache::Mod::Version/Apache::Mod[version]/File[version.load]/ensure: defined content as '{md5}1c9243de22ace4dc8266442c48ae0c92'", > "Notice: /Stage[main]/Apache::Mod::Vhost_alias/Apache::Mod[vhost_alias]/File[vhost_alias.load]/ensure: defined content as '{md5}eca907865997d50d5130497665c3f82e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_digest]/File[auth_digest.load]/ensure: defined content as '{md5}df9e85f8da0b239fe8e698ae7ead4f60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_anon]/File[authn_anon.load]/ensure: defined content as '{md5}bf57b94b5aec35476fc2a2dc3861f132'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authn_dbm]/File[authn_dbm.load]/ensure: defined content as '{md5}90ee8f8ef1a017cacadfda4225e10651'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_dbm]/File[authz_dbm.load]/ensure: defined content as '{md5}c1363277984d22f99b70f7dce8753b60'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_owner]/File[authz_owner.load]/ensure: defined content as '{md5}f30a9be1016df87f195449d9e02d1857'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[expires]/File[expires.load]/ensure: defined content as '{md5}f0825bad1e470de86ffabeb86dcc5d95'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[include]/File[include.load]/ensure: defined content as '{md5}88095a914eedc3c2c184dd5d74c3954c'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[logio]/File[logio.load]/ensure: defined content as '{md5}084533c7a44e9129d0e6df952e2472b6'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[substitute]/File[substitute.load]/ensure: defined content as 
'{md5}8077c34a71afcf41c8fc644830935915'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[usertrack]/File[usertrack.load]/ensure: defined content as '{md5}e95fbbf030fabec98b948f8dc217775c'", > "Notice: /Stage[main]/Apache::Mod::Alias/Apache::Mod[alias]/File[alias.load]/ensure: defined content as '{md5}3cf2fa309ccae4c29a4b875d0894cd79'", > "Notice: /Stage[main]/Apache::Mod::Authn_file/Apache::Mod[authn_file]/File[authn_file.load]/ensure: defined content as '{md5}d41656680003d7b890267bb73621c60b'", > "Notice: /Stage[main]/Apache::Mod::Autoindex/Apache::Mod[autoindex]/File[autoindex.load]/ensure: defined content as '{md5}515cdf5b573e961a60d2931d39248648'", > "Notice: /Stage[main]/Apache::Mod::Dav/Apache::Mod[dav]/File[dav.load]/ensure: defined content as '{md5}588e496251838c4840c14b28b5aa7881'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/File[dav_fs.conf]/ensure: defined content as '{md5}899a57534f3d84efa81887ec93c90c9b'", > "Notice: /Stage[main]/Apache::Mod::Dav_fs/Apache::Mod[dav_fs]/File[dav_fs.load]/ensure: defined content as '{md5}2996277c73b1cd684a9a3111c355e0d3'", > "Notice: /Stage[main]/Apache::Mod::Deflate/Apache::Mod[deflate]/File[deflate.load]/ensure: defined content as '{md5}2d1a1afcae0c70557251829a8586eeaf'", > "Notice: /Stage[main]/Apache::Mod::Dir/Apache::Mod[dir]/File[dir.load]/ensure: defined content as '{md5}1bfb1c2a46d7351fc9eb47c659dee068'", > "Notice: /Stage[main]/Apache::Mod::Negotiation/Apache::Mod[negotiation]/File[negotiation.load]/ensure: defined content as '{md5}d262ee6a5f20d9dd7f87770638dc2ccd'", > "Notice: /Stage[main]/Apache::Mod::Setenvif/Apache::Mod[setenvif]/File[setenvif.load]/ensure: defined content as '{md5}ec6c99f7cc8e35bdbcf8028f652c9f6d'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[auth_basic]/File[auth_basic.load]/ensure: defined content as '{md5}494bcf4b843f7908675d663d8dc1bdc8'", > "Notice: /Stage[main]/Apache::Mod::Filter/Apache::Mod[filter]/File[filter.load]/ensure: defined content as 
'{md5}66a1e2064a140c3e7dca7ac33877700e'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_core]/File[authz_core.load]/ensure: defined content as '{md5}39942569bff2abdb259f9a347c7246bc'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[access_compat]/File[access_compat.load]/ensure: defined content as '{md5}d5feb88bec4570e2dbc41cce7e0de003'", > "Notice: /Stage[main]/Apache::Mod::Authz_user/Apache::Mod[authz_user]/File[authz_user.load]/ensure: defined content as '{md5}63594303ee808423679b1ea13dd5a784'", > "Notice: /Stage[main]/Apache::Default_mods/Apache::Mod[authz_groupfile]/File[authz_groupfile.load]/ensure: defined content as '{md5}ae005a36b3ac8c20af36c434561c8a75'", > "Notice: /Stage[main]/Apache::Mod::Env/Apache::Mod[env]/File[env.load]/ensure: defined content as '{md5}d74184d40d0ee24ba02626a188ee7e1a'", > "Notice: /Stage[main]/Apache::Mod::Prefork/Apache::Mpm[prefork]/File[/etc/httpd/conf.modules.d/prefork.load]/ensure: defined content as '{md5}157529aafcf03fa491bc924103e4608e'", > "Notice: /Stage[main]/Apache::Mod::Cgi/Apache::Mod[cgi]/File[cgi.load]/ensure: defined content as '{md5}ac20c5c5779b37ab06b480d6485a0881'", > "Notice: /Stage[main]/Apache::Mod::Status/Apache::Mod[status]/File[status.load]/ensure: defined content as '{md5}c7726ef20347ef9a06ef68eeaad79765'", > "Notice: /Stage[main]/Apache::Mod::Ssl/Apache::Mod[ssl]/File[ssl.load]/ensure: defined content as '{md5}e282ac9f82fe5538692a4de3616fb695'", > "Notice: /Stage[main]/Apache::Mod::Socache_shmcb/Apache::Mod[socache_shmcb]/File[socache_shmcb.load]/ensure: defined content as '{md5}ab31a6ea611785f74851b578572e4157'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Apache/Systemd::Dropin_file[httpd.conf]/File[/etc/systemd/system/httpd.service.d/httpd.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: 
/Stage[main]/Apache/File[/etc/httpd/conf.d/README]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/autoindex.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/userdir.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/welcome.conf]/ensure: removed", > "Notice: /Stage[main]/Apache::Mod::Ssl/File[ssl.conf]/content: content changed '{md5}9e163ce201541f8aa36fcc1a372ed34d' to '{md5}b6f6f2773db25c777f1db887e7a3f57d'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/File[wsgi.conf]/ensure: defined content as '{md5}8b3feb3fc2563de439920bb2c52cbd11'", > "Notice: /Stage[main]/Apache::Mod::Wsgi/Apache::Mod[wsgi]/File[wsgi.load]/ensure: defined content as '{md5}e1795e051e7aae1f865fde0d3b86a507'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-base.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-dav.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-lua.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-mpm.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-proxy.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-ssl.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/00-systemd.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/01-cgi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-wsgi.conf]/ensure: removed", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[/var/www/cgi-bin/gnocchi]/ensure: created", > "Notice: /Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/File[gnocchi_wsgi]/ensure: defined content as '{md5}c03530dd30d25ec70b705e0c2f43df7a'", > "Notice: 
/Stage[main]/Gnocchi::Wsgi::Apache/Openstacklib::Wsgi::Apache[gnocchi_wsgi]/Apache::Vhost[gnocchi_wsgi]/Concat[10-gnocchi_wsgi.conf]/File[/etc/httpd/conf.d/10-gnocchi_wsgi.conf]/ensure: defined content as '{md5}1524f118b98bfea9814025b4dfb8fc4a'", > "Notice: Applied catalog in 1.18 seconds", > " Total: 110", > " Success: 110", > " Changed: 110", > " Out of sync: 110", > " Total: 261", > " Skipped: 43", > " Concat file: 0.00", > " Anchor: 0.00", > " Concat fragment: 0.00", > " Augeas: 0.02", > " Gnocchi config: 0.27", > " File: 0.30", > " Last run: 1534760793", > " Config retrieval: 4.48", > " Total: 5.08", > " Resources: 0.00", > " Config: 1534760787", > "Gathering files modified after 2018-08-20 10:26:21.458830264 +0000", > "2018-08-20 10:26:34,958 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config'", > "+ origin_of_time=/var/lib/config-data/gnocchi.origin_of_time", > "+ touch /var/lib/config-data/gnocchi.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,gnocchi_api_paste_ini,gnocchi_config,gnocchi_config,gnocchi_config /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/gnocchi/manifests/db.pp\", 26]:[\"/etc/puppet/modules/gnocchi/manifests/init.pp\", 54]", > "Warning: ModuleLoader: module 'mysql' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/gnocchi/manifests/config.pp\", 29]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/gnocchi.pp\", 31]", > "Warning: Scope(Class[Gnocchi::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: ModuleLoader: module 'openstacklib' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/gnocchi", > "++ stat -c %y /var/lib/config-data/gnocchi.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:21.458830264 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/gnocchi", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/gnocchi", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/gnocchi.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/gnocchi --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/gnocchi --mtime=1970-01-01", > "2018-08-20 10:26:34,958 INFO: 28340 -- Removing container: docker-puppet-gnocchi", > "2018-08-20 10:26:35,003 DEBUG: 28340 -- docker-puppet-gnocchi", > "2018-08-20 10:26:35,003 INFO: 28340 -- Finished processing puppet configs for gnocchi", > "2018-08-20 10:26:35,003 INFO: 28340 -- Starting configuration of clustercheck using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:35,003 DEBUG: 28340 -- config_volume clustercheck", > "2018-08-20 10:26:35,004 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-08-20 10:26:35,004 DEBUG: 28340 -- manifest include ::tripleo::profile::pacemaker::clustercheck", > "2018-08-20 10:26:35,004 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 
10:26:35,004 DEBUG: 28340 -- volumes []", > "2018-08-20 10:26:35,005 INFO: 28340 -- Removing container: docker-puppet-clustercheck", > "2018-08-20 10:26:35,066 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:38,572 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.01 seconds", > "Notice: /Stage[main]/Nova::Db/Nova_config[api_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Nova_config[placement_database/connection]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[glance/api_servers]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[api/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/image_service]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[cinder/catalog_info]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[os_vif_linux_bridge/use_ipv6]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_api_faults]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created", > "Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created", > "Notice: 
/Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Placement/Nova_config[placement/os_interface]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created", > "Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Db/Oslo::Db[nova_config]/Nova_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/memcached_servers]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Keystone::Authtoken/Keystone::Resource::Authtoken[nova_config]/Nova_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}37ed0de7c9ebb4682f22584b78bf1bc4'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/File[/etc/httpd/conf.d/00-nova-placement-api.conf]/content: content changed '{md5}611e31d39e1635bfabc0aafc51b43d0b' to '{md5}612d455490cfecc4b51db6656ea39240'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/File[placement_wsgi]/ensure: defined content as '{md5}2c992c50344eb1765282cb9fb70126db'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_placement/Openstacklib::Wsgi::Apache[placement_wsgi]/Apache::Vhost[placement_wsgi]/Concat[10-placement_wsgi.conf]/File[/etc/httpd/conf.d/10-placement_wsgi.conf]/ensure: defined content as '{md5}0736aa6e5e26bedfe11b9ef7e39d7b59'", > "Notice: Applied catalog in 7.17 seconds", > " Total: 133", > " Success: 133", > " Changed: 133", > " Out of sync: 133", > " Total: 375", > " Skipped: 39", > " Package: 0.09", > " File: 0.48", > " Total: 11.16", > " Last run: 1534760796", > " Config retrieval: 4.53", > " Nova config: 6.03", > " Config: 1534760784", > "Gathering files modified after 2018-08-20 10:26:18.296826872 +0000", > "2018-08-20 10:26:38,572 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova_placement.origin_of_time", > "+ touch 
/var/lib/config-data/nova_placement.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config /etc/config.pp", > "ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > "Warning: ModuleLoader: module 'nova' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/config.pp\", 37]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 114]", > "Warning: Scope(Class[Nova::Db]): placement_database_connection has no effect as of pike, and may be removed in a future release", > "Warning: Scope(Class[Nova::Db]): placement_slave_connection has no effect as of pike, and may be removed in a future release", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/db.pp\", 126]:[\"/etc/puppet/modules/nova/manifests/init.pp\", 545]", > "Warning: Scope(Class[Nova]): nova::use_syslog, nova::use_stderr, nova::log_facility, nova::log_dir \\", > "and nova::debug is deprecated and has been moved to nova::logging class, please set them there.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/nova/manifests/init.pp\", 555]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/placement.pp\", 62]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/placement.pp\", 101]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 138]", > "Warning: Scope(Class[Nova::Placement]): The os_region_name parameter is deprecated and will be removed \\", > "in a future release. Please use region_name instead.", > "Warning: Scope(Class[Nova::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova_placement", > "++ stat -c %y /var/lib/config-data/nova_placement.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:18.296826872 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova_placement", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova_placement", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova_placement.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova_placement --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova_placement --mtime=1970-01-01", > "2018-08-20 10:26:38,572 INFO: 28338 -- Removing container: docker-puppet-nova_placement", > "2018-08-20 10:26:38,627 DEBUG: 28338 -- docker-puppet-nova_placement", > "2018-08-20 10:26:38,627 INFO: 28338 -- Finished processing puppet configs for nova_placement", > "2018-08-20 10:26:38,628 INFO: 28338 -- Starting configuration of aodh using image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:26:38,628 DEBUG: 28338 -- config_volume aodh", > "2018-08-20 10:26:38,628 DEBUG: 28338 -- puppet_tags 
file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config", > "2018-08-20 10:26:38,628 DEBUG: 28338 -- manifest include tripleo::profile::base::aodh::api", > "include tripleo::profile::base::aodh::evaluator", > "include tripleo::profile::base::aodh::listener", > "include tripleo::profile::base::aodh::notifier", > "2018-08-20 10:26:38,628 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:26:38,628 DEBUG: 28338 -- volumes []", > "2018-08-20 10:26:38,630 INFO: 28338 -- Removing container: docker-puppet-aodh", > "2018-08-20 10:26:38,700 INFO: 28338 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:26:40,808 DEBUG: 28338 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-aodh-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-aodh-api", > "d9fda70e439b: Pulling fs layer", > "9ef65032a2c1: Pulling fs layer", > "9ef65032a2c1: Verifying Checksum", > "9ef65032a2c1: Download complete", > "d9fda70e439b: Verifying Checksum", > "d9fda70e439b: Download complete", > "d9fda70e439b: Pull complete", > "9ef65032a2c1: Pull complete", > "Digest: sha256:d5ceac7f9b182880d3a8d011285dd7c3cefd6a2361d284382a5255891496e4eb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:26:40,811 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:26:40,812 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-aodh --env PUPPET_TAGS=file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config --env NAME=aodh --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpCNSCWJ:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z 
--volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-aodh-api:2018-08-17.2", > "2018-08-20 10:26:41,560 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-mariadb ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-mariadb", > "7e179cf1d5ed: Pulling fs layer", > "7e179cf1d5ed: Verifying Checksum", > "7e179cf1d5ed: Download complete", > "7e179cf1d5ed: Pull complete", > "Digest: sha256:26fa9d8bd397751889516ab60e50f6144a6a0c79e16149140495f730e74d687c", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:41,563 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:26:41,563 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-clustercheck --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=clustercheck --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmplOdW6o:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:42,547 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.28 seconds", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/plugins]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Sahara/Sahara_config[DEFAULT/port]/ensure: created", > "Notice: /Stage[main]/Sahara::Service::Api/Sahara_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Sahara::Logging/Oslo::Log[sahara_config]/Sahara_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Db/Oslo::Db[sahara_config]/Sahara_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Sahara::Policy/Oslo::Policy[sahara_config]/Sahara_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: 
/Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Sahara::Keystone::Authtoken/Keystone::Resource::Authtoken[sahara_config]/Sahara_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Default[sahara_config]/Sahara_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Rabbit[sahara_config]/Sahara_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Sahara/Oslo::Messaging::Zmq[sahara_config]/Sahara_config[DEFAULT/rpc_zmq_host]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Sahara::Notify/Oslo::Messaging::Notifications[sahara_config]/Sahara_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 
1.60 seconds", > " Total: 25", > " Success: 25", > " Total: 197", > " Skipped: 23", > " Out of sync: 25", > " Changed: 25", > " Augeas: 0.03", > " Package: 0.05", > " Sahara config: 1.11", > " Last run: 1534760801", > " Config retrieval: 2.64", > " Total: 3.82", > " Config: 1534760797", > "Gathering files modified after 2018-08-20 10:26:30.855839410 +0000", > "2018-08-20 10:26:42,547 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template'", > "+ origin_of_time=/var/lib/config-data/sahara.origin_of_time", > "+ touch /var/lib/config-data/sahara.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,sahara_api_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template,sahara_engine_paste_ini,sahara_cluster_template,sahara_config,sahara_node_group_template /etc/config.pp", > "Warning: ModuleLoader: module 'sahara' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/sahara/manifests/db.pp\", 69]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 380]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/sahara/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/sahara/manifests/init.pp\", 381]", > "Warning: Scope(Class[Sahara]): The use_neutron parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Sahara]): sahara::admin_user, sahara::admin_password, sahara::auth_uri, sahara::identity_uri, sahara::admin_tenant_name and sahara::memcached_servers are deprecated. Please use sahara::keystone::authtoken::* parameters instead.", > "Warning: Scope(Class[Sahara::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/sahara", > "++ stat -c %y /var/lib/config-data/sahara.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:30.855839410 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/sahara", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/sahara", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/sahara.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/sahara --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/sahara --mtime=1970-01-01", > "2018-08-20 10:26:42,547 INFO: 28339 -- Removing container: docker-puppet-sahara", > "2018-08-20 10:26:42,584 DEBUG: 28339 -- docker-puppet-sahara", > "2018-08-20 10:26:42,585 INFO: 28339 -- Finished processing puppet configs for sahara", > "2018-08-20 10:26:42,585 INFO: 28339 -- Starting configuration of mysql using image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:42,585 DEBUG: 28339 -- config_volume mysql", > "2018-08-20 10:26:42,585 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-08-20 10:26:42,585 DEBUG: 28339 -- manifest ['Mysql_datadir', 'Mysql_user', 'Mysql_database', 
'Mysql_grant', 'Mysql_plugin'].each |String $val| { noop_resource($val) }", > "2018-08-20 10:26:42,585 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:42,585 DEBUG: 28339 -- volumes []", > "2018-08-20 10:26:42,586 INFO: 28339 -- Removing container: docker-puppet-mysql", > "2018-08-20 10:26:42,635 INFO: 28339 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:42,638 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:26:42,638 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-mysql --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=mysql --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpd2aWL_:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-mariadb:2018-08-17.2", > "2018-08-20 10:26:49,296 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.44 seconds", > "Notice: 
/Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}fd280253404c973169376755368f4221'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/content: content changed '{md5}9ff8cc688dd9f0dfc45e5afd25c427a7' to '{md5}7d37008224e71625019cb48768f267e7'", > "Notice: /Stage[main]/Xinetd/File[/etc/xinetd.conf]/mode: mode changed '0600' to '0644'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Clustercheck/Xinetd::Service[galera-monitor]/File[/etc/xinetd.d/galera-monitor]/ensure: defined content as '{md5}3afdef3c0450b1869412e40a88f2bfb2'", > "Notice: Applied catalog in 0.04 seconds", > " Total: 4", > " Success: 4", > " Total: 13", > " Out of sync: 3", > " Changed: 3", > " Skipped: 9", > " File: 0.02", > " Config retrieval: 0.59", > " Total: 0.61", > " Last run: 1534760808", > " Config: 1534760807", > "Gathering files modified after 2018-08-20 10:26:41.760849809 +0000", > "2018-08-20 10:26:49,297 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,file ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,file'", > "+ origin_of_time=/var/lib/config-data/clustercheck.origin_of_time", > "+ touch /var/lib/config-data/clustercheck.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,file /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/clustercheck", > "++ stat -c %y /var/lib/config-data/clustercheck.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:41.760849809 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/clustercheck", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/clustercheck", > "++ find /etc /root /opt 
/var/spool/cron -newer /var/lib/config-data/clustercheck.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/clustercheck --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/clustercheck --mtime=1970-01-01", > "2018-08-20 10:26:49,297 INFO: 28340 -- Removing container: docker-puppet-clustercheck", > "2018-08-20 10:26:49,331 DEBUG: 28340 -- docker-puppet-clustercheck", > "2018-08-20 10:26:49,331 INFO: 28340 -- Finished processing puppet configs for clustercheck", > "2018-08-20 10:26:49,332 INFO: 28340 -- Starting configuration of redis using image 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:26:49,332 DEBUG: 28340 -- config_volume redis", > "2018-08-20 10:26:49,332 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,exec", > "2018-08-20 10:26:49,332 DEBUG: 28340 -- manifest include ::tripleo::profile::pacemaker::database::redis_bundle", > "2018-08-20 10:26:49,332 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:26:49,332 DEBUG: 28340 -- volumes []", > "2018-08-20 10:26:49,333 INFO: 28340 -- Removing container: docker-puppet-redis", > "2018-08-20 10:26:49,398 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:26:53,035 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-redis ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-redis", > "5c96d1d58fe2: Pulling fs layer", > "d3a7242f9dc4: Pulling fs layer", > "5c96d1d58fe2: Verifying Checksum", > "5c96d1d58fe2: Download complete", > "5c96d1d58fe2: Pull complete", > "d3a7242f9dc4: Verifying Checksum", > "d3a7242f9dc4: Download complete", > "d3a7242f9dc4: Pull complete", > "Digest: sha256:0104b6c01fd71f565e5ec4bfed9f06543a8f25e81ca09630e1895946e173ef76", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:26:53,038 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:26:53,038 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-redis --env PUPPET_TAGS=file,file_line,concat,augeas,cron,exec --env NAME=redis --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp2ycj6X:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-redis:2018-08-17.2", > "2018-08-20 10:26:54,910 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.26 seconds", > 
"Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/root/.my.cnf]/ensure: defined content as '{md5}8c3a52f8f9c495395eebfa195e380a49'", > "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Database::Mysql_bundle/File[/etc/sysconfig/clustercheck]/ensure: defined content as '{md5}c2c9544401001f240cb75a05cf6d2cfa'", > "Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: content changed '{md5}af90358207ccfecae7af249d5ef7dd3e' to '{md5}da920df6baf6c7424ed796c11086927e'", > "Notice: /Stage[main]/Mysql::Server::Installdb/File[/var/log/mariadb/mariadb.log]/ensure: created", > "Notice: Applied catalog in 0.36 seconds", > " Skipped: 225", > " Total: 230", > " Out of sync: 4", > " Changed: 4", > " File: 0.03", > " Last run: 1534760813", > " Config retrieval: 4.68", > " Total: 4.70", > " Config: 1534760808", > "Gathering files modified after 2018-08-20 10:26:42.834850834 +0000", > "2018-08-20 10:26:54,910 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/mysql.origin_of_time", > "+ touch /var/lib/config-data/mysql.origin_of_time", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/pacemaker/database/mysql_bundle.pp\", 133]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 103]:[\"/etc/config.pp\", 4]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/aodh/manifests/db/mysql.pp\", 57]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/database/mysql.pp\", 175]", > "Warning: ModuleLoader: module 'cinder' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/openstacklib/manifests/db/mysql/host_access.pp\", 43]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/mysql", > "++ stat -c %y /var/lib/config-data/mysql.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:42.834850834 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/mysql", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/mysql", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/mysql.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/mysql --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/mysql --mtime=1970-01-01", > "2018-08-20 10:26:54,910 INFO: 28339 -- Removing container: docker-puppet-mysql", > "2018-08-20 10:26:54,947 DEBUG: 28339 -- docker-puppet-mysql", > "2018-08-20 10:26:54,947 INFO: 28339 -- Finished processing puppet configs for mysql", > "2018-08-20 10:26:54,947 INFO: 28339 -- Starting configuration of nova using image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:26:54,948 DEBUG: 28339 -- config_volume nova", > "2018-08-20 10:26:54,948 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config", > "2018-08-20 10:26:54,948 DEBUG: 28339 -- manifest ['Nova_cell_v2'].each |String $val| { noop_resource($val) }", > "include tripleo::profile::base::nova::conductor", > "include 
tripleo::profile::base::nova::consoleauth", > "include tripleo::profile::base::nova::scheduler", > "include tripleo::profile::base::nova::vncproxy", > "2018-08-20 10:26:54,948 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:26:54,948 DEBUG: 28339 -- volumes []", > "2018-08-20 10:26:54,949 INFO: 28339 -- Removing container: docker-puppet-nova", > "2018-08-20 10:26:55,010 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:26:55,893 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.13 seconds", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Aodh::Auth/Aodh_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Aodh_config[api/gnocchi_external_project_owner]/ensure: created", > "Notice: /Stage[main]/Aodh::Evaluator/Aodh_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Db/Oslo::Db[aodh_config]/Aodh_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/debug]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Logging/Oslo::Log[aodh_config]/Aodh_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Rabbit[aodh_config]/Aodh_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Default[aodh_config]/Aodh_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Aodh/Oslo::Messaging::Notifications[aodh_config]/Aodh_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Policy/Oslo::Policy[aodh_config]/Aodh_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_name]/ensure: created", > "Notice: 
/Stage[main]/Aodh::Keystone::Authtoken/Keystone::Resource::Authtoken[aodh_config]/Aodh_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Aodh::Api/Oslo::Middleware[aodh_config]/Aodh_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}fc316e9d923e3a94945cfb8c64307e1d'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/owner: owner changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[/var/www/cgi-bin/aodh]/group: group changed 'root' to 'aodh'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/File[aodh_wsgi]/ensure: defined content as '{md5}09d823939c45501c11f2096289fe70cf'", > "Notice: /Stage[main]/Aodh::Wsgi::Apache/Openstacklib::Wsgi::Apache[aodh_wsgi]/Apache::Vhost[aodh_wsgi]/Concat[10-aodh_wsgi.conf]/File[/etc/httpd/conf.d/10-aodh_wsgi.conf]/ensure: defined content as '{md5}3a5e55367f0144775f4f683dd00c98a7'", > "Notice: Applied catalog in 1.92 seconds", > " Changed: 109", > " Out of sync: 109", > " Total: 329", > " Skipped: 40", > " File: 0.34", > " Aodh config: 1.01", > " Last run: 1534760814", > " Config retrieval: 4.78", > " Total: 6.20", > "Gathering files modified after 2018-08-20 10:26:41.016849100 +0000", > "2018-08-20 10:26:55,893 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config'", > "+ origin_of_time=/var/lib/config-data/aodh.origin_of_time", > "+ touch /var/lib/config-data/aodh.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog 
--logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,aodh_api_paste_ini,aodh_config,aodh_config,aodh_config,aodh_config /etc/config.pp", > "Warning: Unknown variable: 'undef'. at /etc/puppet/modules/aodh/manifests/init.pp:290:41", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/aodh/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/aodh.pp\", 123]", > "Warning: Scope(Class[Aodh::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Scope(Class[Aodh::Api]): host has no effect as of Newton and will be removed in a future \\", > "release. aodh::wsgi::apache supports setting a host via bind_host.", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/oslo/manifests/db.pp\", 132]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/aodh", > "++ stat -c %y /var/lib/config-data/aodh.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:41.016849100 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/aodh", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/aodh", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/aodh.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/aodh --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/aodh --mtime=1970-01-01", > "2018-08-20 10:26:55,893 INFO: 28338 -- Removing container: docker-puppet-aodh", > "2018-08-20 10:26:55,945 DEBUG: 28338 -- docker-puppet-aodh", > "2018-08-20 10:26:55,946 INFO: 28338 -- Finished processing puppet configs for aodh", > "2018-08-20 10:26:55,946 INFO: 
28338 -- Starting configuration of heat_api using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:26:55,946 DEBUG: 28338 -- config_volume heat_api", > "2018-08-20 10:26:55,946 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-08-20 10:26:55,946 DEBUG: 28338 -- manifest include ::tripleo::profile::base::heat::api", > "2018-08-20 10:26:55,946 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:26:55,946 DEBUG: 28338 -- volumes []", > "2018-08-20 10:26:55,948 INFO: 28338 -- Removing container: docker-puppet-heat_api", > "2018-08-20 10:26:56,013 INFO: 28338 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:26:58,124 DEBUG: 28338 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api", > "c6b04bd8872f: Pulling fs layer", > "13eb9fa39cc7: Pulling fs layer", > "13eb9fa39cc7: Verifying Checksum", > "13eb9fa39cc7: Download complete", > "c6b04bd8872f: Verifying Checksum", > "c6b04bd8872f: Download complete", > "c6b04bd8872f: Pull complete", > "13eb9fa39cc7: Pull complete", > "Digest: sha256:0fe6d4fb53cfd75bca33b42757665967411f896efc6773aedca2a42c63107e94", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:26:58,127 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:26:58,127 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmps2MP8D:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume 
/usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:26:58,230 DEBUG: 28339 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-nova-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-nova-api", > "a09ac8d7bbe3: Already exists", > "a6885acc2188: Pulling fs layer", > "a6885acc2188: Download complete", > "a6885acc2188: Pull complete", > "Digest: sha256:9cf2d5a5e8b40d1fbff3d78d4752c75c46b3874854adce60adb28779608a86e7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:26:58,233 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:26:58,233 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-nova --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config --env NAME=nova --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp73EPYp:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-nova-api:2018-08-17.2", > "2018-08-20 10:27:01,283 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.95 seconds", > "Notice: /Stage[main]/Redis::Config/File[/etc/redis]/ensure: created", > "Notice: /Stage[main]/Redis::Config/File[/var/log/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Config/File[/var/lib/redis]/mode: mode changed '0750' to '0755'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/security/limits.d/redis.conf]/ensure: defined content as '{md5}a2f723773964f5ea42b6c7c5d6b72208'", > "Notice: /Stage[main]/Redis::Ulimit/File[/etc/systemd/system/redis.service.d/limit.conf]/mode: mode changed '0644' to '0444'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/File[/etc/redis.conf.puppet]/ensure: defined content as '{md5}6d31d605e08855afcf8d376c27b58d66'", > "Notice: /Stage[main]/Redis::Config/Redis::Instance[default]/Exec[cp -p /etc/redis.conf.puppet /etc/redis.conf]: Triggered 'refresh' from 1 events", > "Notice: Applied catalog in 0.05 seconds", > " Total: 6", > " Success: 6", > " Restarted: 1", > " Skipped: 11", > " Total: 21", > " Out of sync: 6", > " Changed: 6", > " Exec: 0.00", > " Augeas: 0.01", > " File: 0.01", > " Config retrieval: 1.08", > " Total: 1.10", > " Last run: 1534760820", > " Config: 1534760819", > "Gathering files modified after 
2018-08-20 10:26:53.240860758 +0000", > "2018-08-20 10:27:01,283 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,exec ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,exec'", > "+ origin_of_time=/var/lib/config-data/redis.origin_of_time", > "+ touch /var/lib/config-data/redis.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,exec /etc/config.pp", > "Warning: ModuleLoader: module 'redis' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/redis", > "++ stat -c %y /var/lib/config-data/redis.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:53.240860758 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/redis", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/redis", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/redis.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/redis --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/redis --mtime=1970-01-01", > "2018-08-20 10:27:01,283 INFO: 28340 -- Removing container: docker-puppet-redis", > "2018-08-20 10:27:01,325 DEBUG: 28340 -- docker-puppet-redis", > "2018-08-20 10:27:01,326 INFO: 28340 -- Finished processing puppet configs for redis", > "2018-08-20 10:27:01,326 INFO: 28340 -- Starting configuration of keystone using image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:27:01,326 DEBUG: 28340 -- config_volume keystone", > "2018-08-20 10:27:01,326 DEBUG: 28340 -- puppet_tags 
file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config", > "2018-08-20 10:27:01,326 DEBUG: 28340 -- manifest ['Keystone_user', 'Keystone_endpoint', 'Keystone_domain', 'Keystone_tenant', 'Keystone_user_role', 'Keystone_role', 'Keystone_service'].each |String $val| { noop_resource($val) }", > "2018-08-20 10:27:01,326 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:27:01,326 DEBUG: 28340 -- volumes []", > "2018-08-20 10:27:01,328 INFO: 28340 -- Removing container: docker-puppet-keystone", > "2018-08-20 10:27:01,393 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:27:03,789 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-keystone ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-keystone", > "744deba60e34: Pulling fs layer", > "139e4eb14664: Pulling fs layer", > "139e4eb14664: Verifying Checksum", > "139e4eb14664: Download complete", > "744deba60e34: Verifying Checksum", > "744deba60e34: Download complete", > "744deba60e34: Pull complete", > "139e4eb14664: Pull complete", > "Digest: sha256:50610fb3694f2a40e0e3b333c154da53d0adb739a9d79af1117e73c3c3d14aad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:27:03,792 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:27:03,793 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-keystone --env PUPPET_TAGS=file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config --env NAME=keystone --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpBSegvn:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-keystone:2018-08-17.2", > "2018-08-20 10:27:12,956 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.43 seconds", > "Notice: /Stage[main]/Heat::Cron::Purge_deleted/Cron[heat-manage purge_deleted]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_domain_admin_password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Domain/Heat_config[DEFAULT/stack_user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/username]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/password]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[trustee/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[clients_keystone/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[DEFAULT/max_json_body_size]/ensure: created", > "Notice: 
/Stage[main]/Heat/Heat_config[ec2authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/limit_iterators]/ensure: created", > "Notice: /Stage[main]/Heat/Heat_config[yaql/memory_quota]/ensure: created", > "Notice: /Stage[main]/Heat::Api/Heat_config[heat_api/bind_host]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Heat::Logging/Oslo::Log[heat_config]/Heat_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Db/Oslo::Db[heat_config]/Heat_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Heat::Keystone::Authtoken/Keystone::Resource::Authtoken[heat_config]/Heat_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Rabbit[heat_config]/Heat_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Notifications[heat_config]/Heat_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/rpc_response_timeout]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Messaging::Default[heat_config]/Heat_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Heat/Oslo::Middleware[heat_config]/Heat_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/expose_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/max_age]/ensure: created", > "Notice: /Stage[main]/Heat::Cors/Oslo::Cors[heat_config]/Heat_config[cors/allow_headers]/ensure: created", > "Notice: /Stage[main]/Heat::Policy/Oslo::Policy[heat_config]/Heat_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}0b4bad3c8a21111582786caceb3bc55a'", > "Notice: 
/Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/File[heat_api_wsgi]/ensure: defined content as '{md5}640891728ce5d46ae40234228561597c'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api/Heat::Wsgi::Apache[api]/Openstacklib::Wsgi::Apache[heat_api_wsgi]/Apache::Vhost[heat_api_wsgi]/Concat[10-heat_api_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_wsgi.conf]/ensure: defined content as '{md5}e7b2b5d57d7b13197d33bbcc8ee73b93'", > "Notice: Applied catalog in 2.63 seconds", > " Total: 121", > " Success: 121", > " Changed: 121", > " Out of sync: 121", > " Skipped: 32", > " Total: 336", > " Cron: 0.01", > " File: 0.40", > " Heat config: 1.61", > " Last run: 1534760831", > " Config retrieval: 3.92", > " Total: 5.99", > " Config: 1534760824", > "Gathering files modified after 2018-08-20 10:26:58.350865589 +0000", > "2018-08-20 10:27:12,956 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,heat_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/heat_api.origin_of_time", > "+ touch /var/lib/config-data/heat_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/db.pp\", 75]:[\"/etc/puppet/modules/heat/manifests/init.pp\", 363]", > "Warning: Scope(Class[Heat::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/heat.pp\", 128]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api", > "++ stat -c %y /var/lib/config-data/heat_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:58.350865589 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api --mtime=1970-01-01", > "2018-08-20 10:27:12,956 INFO: 28338 -- Removing container: docker-puppet-heat_api", > "2018-08-20 10:27:13,000 DEBUG: 28338 -- docker-puppet-heat_api", > "2018-08-20 10:27:13,001 INFO: 28338 -- Finished processing puppet configs for heat_api", > "2018-08-20 10:27:13,001 INFO: 28338 -- Starting configuration of heat using image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:27:13,001 DEBUG: 28338 -- config_volume heat", > "2018-08-20 10:27:13,001 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-08-20 10:27:13,001 DEBUG: 28338 -- manifest include ::tripleo::profile::base::heat::engine", > "2018-08-20 10:27:13,001 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:27:13,002 DEBUG: 28338 -- volumes []", > "2018-08-20 10:27:13,003 INFO: 28338 -- Removing container: docker-puppet-heat", > "2018-08-20 10:27:13,052 INFO: 
28338 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:27:13,055 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:27:13,056 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmp36mnsi:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api:2018-08-17.2", > "2018-08-20 10:27:18,597 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.83 seconds", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_token]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/admin_bind_host]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/public_port]/ensure: created", > "Notice: 
/Stage[main]/Keystone/Keystone_config[DEFAULT/admin_port]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/expiration]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[ssl/enable]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[catalog/template_file]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/provider]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[DEFAULT/notification_format]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/admin_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[eventlet_server/public_workers]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/0]/ensure: defined content as '{md5}aebfdd318564f1c96e229b981d7ba564'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/fernet-keys/1]/ensure: defined content as '{md5}b78efce2d0892b362d9665cf6bc121e2'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys]/ensure: created", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/0]/ensure: defined content as '{md5}821e0af0054d3d6ed6303e6eeeecbce2'", > "Notice: /Stage[main]/Keystone/File[/etc/keystone/credential-keys/1]/ensure: defined content as '{md5}bd5c35e5f4c5b1469a1634233df2109b'", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/key_repository]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[token/revoke_by_id]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[fernet_tokens/max_active_keys]/ensure: created", > "Notice: /Stage[main]/Keystone/Keystone_config[credential/key_repository]/ensure: created", > "Notice: 
/Stage[main]/Keystone::Config/Keystone_config[ec2/driver]/ensure: created", > "Notice: /Stage[main]/Keystone::Cron::Token_flush/Cron[keystone-manage token_flush]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Keystone::Logging/Oslo::Log[keystone_config]/Keystone_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Keystone::Policy/Oslo::Policy[keystone_config]/Keystone_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone::Db/Oslo::Db[keystone_config]/Keystone_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Middleware[keystone_config]/Keystone_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Default[keystone_config]/Keystone_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Notifications[keystone_config]/Keystone_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Keystone/Oslo::Messaging::Rabbit[keystone_config]/Keystone_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: 
/Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}cc00268a09d5e1044c09b90ceab337ea'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/File[keystone_wsgi_main]/ensure: defined content as '{md5}072422f0d75777ed1783e6910b3ddc58'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/File[keystone_wsgi_admin]/ensure: defined content as '{md5}d6dda52b0e14d80a652ecf42686d3962'", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.d/auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_gssapi.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_mellon.conf]/ensure: removed", > "Notice: /Stage[main]/Apache/File[/etc/httpd/conf.modules.d/10-auth_openidc.conf]/ensure: removed", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_main]/Apache::Vhost[keystone_wsgi_main]/Concat[10-keystone_wsgi_main.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_main.conf]/ensure: defined content as '{md5}653272cb76fd2943463a866083dbbfde'", > "Notice: /Stage[main]/Keystone::Wsgi::Apache/Openstacklib::Wsgi::Apache[keystone_wsgi_admin]/Apache::Vhost[keystone_wsgi_admin]/Concat[10-keystone_wsgi_admin.conf]/File[/etc/httpd/conf.d/10-keystone_wsgi_admin.conf]/ensure: defined content as '{md5}f3648a02806a430f97a24c380c6a9710'", > "Notice: Applied catalog in 2.61 seconds", > " Total: 126", > " Success: 126", > " Changed: 126", > " Out of sync: 126", > " Total: 324", > " Skipped: 34", > " File: 0.50", > " Keystone config: 1.50", > " Last run: 1534760837", > " Config retrieval: 4.44", > " Total: 6.52", > " Config: 1534760830", > "Gathering files modified after 2018-08-20 10:27:03.995870300 +0000", > "2018-08-20 10:27:18,597 DEBUG: 
28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config'", > "+ origin_of_time=/var/lib/config-data/keystone.origin_of_time", > "+ touch /var/lib/config-data/keystone.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,keystone_config,keystone_domain_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/policy.pp\", 34]:[\"/etc/puppet/modules/keystone/manifests/init.pp\", 757]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 760]:[\"/etc/config.pp\", 3]", > " with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/keystone/manifests/init.pp\", 1108]:[\"/etc/config.pp\", 3]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/keystone", > "++ stat -c %y /var/lib/config-data/keystone.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:03.995870300 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/keystone", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/keystone", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/keystone.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/keystone --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/keystone --mtime=1970-01-01", > "2018-08-20 10:27:18,597 INFO: 28340 -- Removing container: docker-puppet-keystone", > "2018-08-20 10:27:18,651 DEBUG: 28340 -- docker-puppet-keystone", > "2018-08-20 10:27:18,651 INFO: 28340 -- Finished processing puppet configs for keystone", > "2018-08-20 10:27:18,652 INFO: 28340 -- Starting configuration of memcached using image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:27:18,652 DEBUG: 28340 -- config_volume memcached", > "2018-08-20 10:27:18,652 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-08-20 10:27:18,652 DEBUG: 28340 -- manifest include ::tripleo::profile::base::memcached", > "2018-08-20 10:27:18,652 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:27:18,652 DEBUG: 28340 -- volumes []", > "2018-08-20 10:27:18,654 INFO: 28340 -- Removing container: docker-puppet-memcached", > "2018-08-20 10:27:18,725 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:27:20,199 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-memcached ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-memcached", > "2a4289740bd7: Pulling fs layer", > "2a4289740bd7: Download complete", > "2a4289740bd7: Pull complete", > "Digest: sha256:8dcd48ae17f6431a5fb5fabd8e9de93eff332f9a128d3719053180039fc96900", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:27:20,202 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:27:20,202 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-memcached --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=memcached --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpJwp3oa:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-memcached:2018-08-17.2", > "2018-08-20 10:27:23,560 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 4.45 seconds", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as 
'{md5}4f3bcbde7510fa19b7c63283a7470976'", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[/var/www/cgi-bin/nova]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/File[nova_api_wsgi]/ensure: defined content as '{md5}8bcfb466d72544dd31a4f339243ed669'", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/instance_name_template]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[wsgi/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/enabled_apis]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_listen_port]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/osapi_compute_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/use_forwarded_for]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[api/fping_path]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/service_metadata_proxy]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[neutron/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Nova_config[DEFAULT/allow_resize_to_same_host]/ensure: created", > "Notice: 
/Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/dhcp_domain]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created", > "Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created", > "Notice: /Stage[main]/Nova::Conductor/Nova_config[conductor/workers]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/driver]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler/Nova_config[scheduler/discover_hosts_in_cells_interval]/ensure: created", > "Notice: 
/Stage[main]/Nova::Scheduler::Filter/Nova_config[scheduler/max_attempts]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/host_subset_size]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_io_ops_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/max_instances_per_host]/ensure: created", > "Notice: /Stage[main]/Nova::Scheduler::Filter/Nova_config[filter_scheduler/weight_classes]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_host]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/novncproxy_port]/ensure: created", > "Notice: /Stage[main]/Nova::Vncproxy/Nova_config[vnc/auth_schemes]/ensure: created", > "Notice: /Stage[main]/Nova::Policy/Oslo::Policy[nova_config]/Nova_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Nova::Api/Oslo::Middleware[nova_config]/Nova_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Archive_deleted_rows/Cron[nova-manage db archive_deleted_rows]/ensure: created", > "Notice: /Stage[main]/Nova::Cron::Purge_shadow_tables/Cron[nova-manage db purge]/ensure: created", > "Notice: /Stage[main]/Nova::Wsgi::Apache_api/Openstacklib::Wsgi::Apache[nova_api_wsgi]/Apache::Vhost[nova_api_wsgi]/Concat[10-nova_api_wsgi.conf]/File[/etc/httpd/conf.d/10-nova_api_wsgi.conf]/ensure: defined content as '{md5}5fb7a8f737662544790610b5d8f92ceb'", > "Notice: Applied catalog in 11.01 seconds", > " Total: 181", > " Success: 181", > " Changed: 181", > " Out of sync: 181", > " Total: 505", > " Skipped: 75", > " Cron: 0.03", > " Total: 15.25", > " Last run: 1534760841", > " Config retrieval: 5.23", > " Nova config: 9.47", > " Config: 1534760825", > "Gathering files modified after 2018-08-20 10:26:58.439865663 +0000", > "2018-08-20 10:27:23,560 DEBUG: 28339 -- + mkdir -p /etc/puppet", > 
"+ '[' -n file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config'", > "+ origin_of_time=/var/lib/config-data/nova.origin_of_time", > "+ touch /var/lib/config-data/nova.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,nova_config,nova_config,nova_config,nova_config,nova_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/nova.pp\", 105]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/nova/manifests/init.pp\", 555]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/api.pp\", 92]", > "Warning: Scope(Class[Nova::Api]): Running nova metadata api via evenlet is deprecated and will be removed in Stein release.", > "Warning: Unknown variable: '::nova::api::default_floating_pool'. at /etc/puppet/modules/nova/manifests/network/neutron.pp:112:38", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. 
at [\"/etc/puppet/modules/nova/manifests/scheduler/filter.pp\", 150]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/nova/scheduler.pp\", 32]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/nova", > "++ stat -c %y /var/lib/config-data/nova.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:26:58.439865663 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/nova", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/nova", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/nova.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/nova --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/nova --mtime=1970-01-01", > "2018-08-20 10:27:23,561 INFO: 28339 -- Removing container: docker-puppet-nova", > "2018-08-20 10:27:23,615 DEBUG: 28339 -- docker-puppet-nova", > "2018-08-20 10:27:23,615 INFO: 28339 -- Finished processing puppet configs for nova", > "2018-08-20 10:27:23,616 INFO: 28339 -- Starting configuration of iscsid using image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:27:23,616 DEBUG: 28339 -- config_volume iscsid", > "2018-08-20 10:27:23,616 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,iscsid_config", > "2018-08-20 10:27:23,616 DEBUG: 28339 -- manifest include ::tripleo::profile::base::iscsid", > "2018-08-20 10:27:23,616 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:27:23,616 DEBUG: 28339 -- volumes [u'/etc/iscsi:/etc/iscsi']", > "2018-08-20 10:27:23,618 INFO: 28339 -- Removing container: docker-puppet-iscsid", > "2018-08-20 10:27:23,681 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:27:24,320 DEBUG: 28339 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-iscsid ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-iscsid", > "f989f56727fb: Pulling fs layer", > "f989f56727fb: Verifying Checksum", > "f989f56727fb: Download complete", > "f989f56727fb: Pull complete", > "Digest: sha256:1fed697b95f255d2ed0c3ff9331f96cff5d71bb8b695d3004417b945b8902cdb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:27:24,324 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:27:24,324 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-iscsid --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpwgirOD:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/iscsi:/etc/iscsi --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-iscsid:2018-08-17.2", > "2018-08-20 10:27:24,839 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.11 seconds", > "Notice: 
/Stage[main]/Heat::Engine/Heat_config[DEFAULT/auth_encryption_key]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_metadata_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/heat_waitcondition_server_url]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_resources_per_stack]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/num_engine_workers]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/convergence_engine]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/reauthentication_auth_method]/ensure: created", > "Notice: /Stage[main]/Heat::Engine/Heat_config[DEFAULT/max_nested_stack_depth]/ensure: created", > "Notice: Applied catalog in 1.87 seconds", > " Total: 48", > " Success: 48", > " Skipped: 21", > " Total: 223", > " Out of sync: 48", > " Changed: 48", > " Package: 0.04", > " Last run: 1534760843", > " Config retrieval: 2.45", > " Total: 4.13", > " Config: 1534760839", > "Gathering files modified after 2018-08-20 10:27:13.252878024 +0000", > "2018-08-20 10:27:24,840 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat.origin_of_time", > "+ touch /var/lib/config-data/heat.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat", > "++ stat -c %y /var/lib/config-data/heat.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:13.252878024 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat --mtime=1970-01-01", > "+ tar -c -f - 
/var/lib/config-data/puppet-generated/heat --mtime=1970-01-01", > "2018-08-20 10:27:24,840 INFO: 28338 -- Removing container: docker-puppet-heat", > "2018-08-20 10:27:24,874 DEBUG: 28338 -- docker-puppet-heat", > "2018-08-20 10:27:24,875 INFO: 28338 -- Finished processing puppet configs for heat", > "2018-08-20 10:27:24,875 INFO: 28338 -- Starting configuration of cinder using image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:27:24,875 DEBUG: 28338 -- config_volume cinder", > "2018-08-20 10:27:24,875 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line", > "2018-08-20 10:27:24,875 DEBUG: 28338 -- manifest include ::tripleo::profile::base::cinder::api", > "include ::tripleo::profile::base::cinder::backup::ceph", > "include ::tripleo::profile::base::cinder::scheduler", > "include ::tripleo::profile::base::lvm", > "2018-08-20 10:27:24,875 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:27:24,875 DEBUG: 28338 -- volumes []", > "2018-08-20 10:27:24,876 INFO: 28338 -- Removing container: docker-puppet-cinder", > "2018-08-20 10:27:24,939 INFO: 28338 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:27:28,277 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.69 seconds", > "Notice: /Stage[main]/Memcached/File[/etc/sysconfig/memcached]/content: content changed '{md5}a50ed62e82d31fb4cb2de2226650c545' to '{md5}161d577b650b3ff28537caca84c8244b'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d]/ensure: created", > "Notice: 
/Stage[main]/Tripleo::Profile::Base::Memcached/Systemd::Dropin_file[memcached.conf]/File[/etc/systemd/system/memcached.service.d/memcached.conf]/ensure: defined content as '{md5}c44e90292b030f86c3b82096b68fe9cc'", > "Notice: Applied catalog in 0.09 seconds", > " Total: 3", > " Success: 3", > " Skipped: 10", > " File: 0.07", > " Config retrieval: 0.81", > " Total: 0.88", > " Last run: 1534760847", > " Config: 1534760846", > "Gathering files modified after 2018-08-20 10:27:20.399883988 +0000", > "2018-08-20 10:27:28,278 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/memcached.origin_of_time", > "+ touch /var/lib/config-data/memcached.origin_of_time", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/memcached", > "++ stat -c %y /var/lib/config-data/memcached.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:20.399883988 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/memcached", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/memcached", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/memcached.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/memcached --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/memcached --mtime=1970-01-01", > "2018-08-20 10:27:28,278 INFO: 28340 -- Removing container: docker-puppet-memcached", > "2018-08-20 10:27:28,314 DEBUG: 28340 -- docker-puppet-memcached", > "2018-08-20 10:27:28,315 INFO: 28340 -- Finished processing puppet configs for memcached", > "2018-08-20 10:27:28,315 INFO: 28340 -- Starting configuration of panko using image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:27:28,315 DEBUG: 28340 -- config_volume panko", > "2018-08-20 10:27:28,315 DEBUG: 28340 -- puppet_tags 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config", > "2018-08-20 10:27:28,315 DEBUG: 28340 -- manifest include tripleo::profile::base::panko::api", > "2018-08-20 10:27:28,316 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:27:28,316 DEBUG: 28340 -- volumes []", > "2018-08-20 10:27:28,317 INFO: 28340 -- Removing container: docker-puppet-panko", > "2018-08-20 10:27:28,382 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:27:30,756 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-panko-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-panko-api", > "fcbb8aa15eb3: Pulling fs layer", > "7ff583aeac14: Pulling fs layer", > "7ff583aeac14: Verifying Checksum", > "7ff583aeac14: Download complete", > "fcbb8aa15eb3: Verifying Checksum", > "fcbb8aa15eb3: Download complete", > "fcbb8aa15eb3: Pull complete", > "7ff583aeac14: Pull complete", > "Digest: sha256:19b1e16edeefec694c7906b75dbde88f16b1bcec49c66cff38ee643496b81fad", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:27:30,759 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:27:30,759 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-panko --env PUPPET_TAGS=file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config --env NAME=panko --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpxW1gly:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-panko-api:2018-08-17.2", > "2018-08-20 10:27:32,330 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.55 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully", > "Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created", > " Total: 2", > " Success: 2", > " Total: 10", > " Out of sync: 2", > " Changed: 2", > " Skipped: 8", > " Exec: 0.02", > " Config retrieval: 0.68", > " Total: 0.70", > " Last run: 1534760851", > " Config: 1534760850", > "Gathering files modified after 2018-08-20 10:27:24.575887473 +0000", > "2018-08-20 10:27:32,330 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,iscsid_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,iscsid_config'", > "+ origin_of_time=/var/lib/config-data/iscsid.origin_of_time", > "+ touch /var/lib/config-data/iscsid.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,iscsid_config /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/iscsid", > "++ stat -c %y /var/lib/config-data/iscsid.origin_of_time", > "+ echo 
'Gathering files modified after 2018-08-20 10:27:24.575887473 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/iscsid", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/iscsid", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/iscsid.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/iscsid --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/iscsid --mtime=1970-01-01", > "2018-08-20 10:27:32,330 INFO: 28339 -- Removing container: docker-puppet-iscsid", > "2018-08-20 10:27:32,364 DEBUG: 28339 -- docker-puppet-iscsid", > "2018-08-20 10:27:32,365 INFO: 28339 -- Finished processing puppet configs for iscsid", > "2018-08-20 10:27:32,365 INFO: 28339 -- Starting configuration of glance_api using image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:27:32,365 DEBUG: 28339 -- config_volume glance_api", > "2018-08-20 10:27:32,365 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config", > "2018-08-20 10:27:32,365 DEBUG: 28339 -- manifest include ::tripleo::profile::base::glance::api", > "2018-08-20 10:27:32,365 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:27:32,365 DEBUG: 28339 -- volumes []", > "2018-08-20 10:27:32,366 INFO: 28339 -- Removing container: docker-puppet-glance_api", > "2018-08-20 10:27:32,432 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:27:33,620 DEBUG: 28338 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-api ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-api", > "c086fc84b8c8: Pulling fs layer", > "ee70e53c0782: Pulling fs layer", > "ee70e53c0782: Verifying Checksum", > "ee70e53c0782: Download complete", > "c086fc84b8c8: Verifying Checksum", > "c086fc84b8c8: Download complete", > "c086fc84b8c8: Pull complete", > "ee70e53c0782: Pull complete", > "Digest: sha256:e13449f3298a8b3c8e9b24ee77263632c927635e1569957c4f8071fbaaf82adb", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:27:33,623 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:27:33,623 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-cinder --env PUPPET_TAGS=file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line --env NAME=cinder --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpnCpMb2:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cinder-api:2018-08-17.2", > "2018-08-20 10:27:38,645 DEBUG: 28339 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-glance-api ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-glance-api", > "a53840c25d8e: Pulling fs layer", > "673426487b22: Pulling fs layer", > "673426487b22: Verifying Checksum", > "673426487b22: Download complete", > "a53840c25d8e: Verifying Checksum", > "a53840c25d8e: Download complete", > "a53840c25d8e: Pull complete", > "673426487b22: Pull complete", > "Digest: sha256:951dcaf7e1bc99d02c63cc3bcddb722163eaec604b8283b924830af077c969c8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:27:38,648 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:27:38,648 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-glance_api --env PUPPET_TAGS=file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config --env NAME=glance_api --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpRPPOCu:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-glance-api:2018-08-17.2", > "2018-08-20 10:27:44,098 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load 
such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.52 seconds", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/host]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/port]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/workers]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[api/max_limit]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_config[database/event_time_to_live]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Panko_api_paste_ini[pipeline:main/pipeline]/ensure: created", > "Notice: /Stage[main]/Panko::Expirer/Cron[panko-expirer]/ensure: created", > "Notice: /Stage[main]/Panko::Logging/Oslo::Log[panko_config]/Panko_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Panko::Db/Oslo::Db[panko_config]/Panko_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Panko::Policy/Oslo::Policy[panko_config]/Panko_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Panko::Keystone::Authtoken/Keystone::Resource::Authtoken[panko_config]/Panko_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Panko::Api/Oslo::Middleware[panko_config]/Panko_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}83ed74d75e6969c931075bd7f8c4c5c6'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[/var/www/cgi-bin/panko]/ensure: created", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/File[panko_wsgi]/ensure: defined content as '{md5}e6f446b6267321fd2251a3e83021181a'", > "Notice: /Stage[main]/Panko::Wsgi::Apache/Openstacklib::Wsgi::Apache[panko_wsgi]/Apache::Vhost[panko_wsgi]/Concat[10-panko_wsgi.conf]/File[/etc/httpd/conf.d/10-panko_wsgi.conf]/ensure: defined content as '{md5}bfdade05977c387c2e864c291e53d1ec'", > "Notice: Applied catalog in 1.09 seconds", > " Total: 101", > " Success: 101", > " Changed: 101", > " Out of sync: 101", > " Total: 256", > " Panko api paste ini: 0.00", > " Panko config: 0.21", > " File: 0.36", > " Last run: 1534760862", > " Config retrieval: 4.04", > " Total: 4.68", > " Config: 1534760857", > "Gathering files modified after 2018-08-20 10:27:30.976892814 +0000", > "2018-08-20 10:27:44,099 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config'", > "+ origin_of_time=/var/lib/config-data/panko.origin_of_time", > "+ touch /var/lib/config-data/panko.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,panko_api_paste_ini,panko_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/config.pp\", 33]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko.pp\", 32]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/panko/manifests/db.pp\", 59]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/panko/api.pp\", 83]", > "Warning: Scope(Class[Panko::Api]): This Class is deprecated and will be removed in future releases.", > "Warning: Scope(Class[Panko::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/panko", > "++ stat -c %y /var/lib/config-data/panko.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:30.976892814 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/panko", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/panko", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/panko.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/panko --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/panko --mtime=1970-01-01", > "2018-08-20 10:27:44,099 INFO: 28340 -- Removing container: docker-puppet-panko", > "2018-08-20 10:27:44,148 DEBUG: 28340 -- docker-puppet-panko", > "2018-08-20 10:27:44,148 INFO: 28340 -- Finished processing puppet configs for panko", > "2018-08-20 10:27:44,148 INFO: 28340 -- Starting configuration of crond using image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:27:44,148 DEBUG: 28340 -- config_volume crond", > "2018-08-20 10:27:44,149 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron", > "2018-08-20 10:27:44,149 DEBUG: 28340 -- manifest include ::tripleo::profile::base::logging::logrotate", > "2018-08-20 10:27:44,149 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:27:44,149 DEBUG: 28340 -- volumes []", > "2018-08-20 10:27:44,150 INFO: 28340 -- Removing container: docker-puppet-crond", > "2018-08-20 10:27:44,215 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:27:44,737 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cron ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cron", > "919f91872d6f: Pulling fs layer", > "919f91872d6f: Verifying Checksum", > "919f91872d6f: Download complete", > "919f91872d6f: Pull complete", > "Digest: sha256:373f758caa0aef7f9e786c29b62a7665961ad46e10b1981de52c43135c4f20f7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:27:44,740 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:27:44,741 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-crond --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpzvpIVi:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-cron:2018-08-17.2", > "2018-08-20 10:27:51,168 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.21 seconds", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_host]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api/Glance_api_config[DEFAULT/bind_port]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/workers]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_image_direct_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/show_multiple_locations]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_cache_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enabled_import_methods]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/node_staging_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/image_member_quota]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v1_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/enable_v2_api]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[glance_store/stores]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[glance_store/os_region_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_cache_config[DEFAULT/registry_host]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Glance_api_config[paste_deploy/flavor]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_user]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/rbd_store_pool]/ensure: created", > "Notice: /Stage[main]/Glance::Backend::Rbd/Glance_api_config[glance_store/default_store]/ensure: created", > 
"Notice: /Stage[main]/Glance::Policy/Oslo::Policy[glance_api_config]/Glance_api_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Db/Oslo::Db[glance_api_config]/Glance_api_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Logging/Oslo::Log[glance_api_config]/Glance_api_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_file]/ensure: created", > "Notice: /Stage[main]/Glance::Cache::Logging/Oslo::Log[glance_cache_config]/Glance_cache_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/username]/ensure: created", > "Notice: 
/Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api::Authtoken/Keystone::Resource::Authtoken[glance_api_config]/Glance_api_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Glance::Api/Oslo::Middleware[glance_api_config]/Glance_api_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Rabbit[glance_api_config]/Glance_api_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Default[glance_api_config]/Glance_api_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Glance::Notify::Rabbitmq/Oslo::Messaging::Notifications[glance_api_config]/Glance_api_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: Applied catalog in 2.47 seconds", > " Total: 44", > " Success: 44", > " Total: 255", > " Out of sync: 44", > " Changed: 44", > " Skipped: 60", > " Glance cache config: 0.22", > " Glance api config: 1.94", > " Last run: 1534760869", > " Config retrieval: 2.53", > " Total: 4.76", > " Config: 1534760865", > "Gathering files modified after 2018-08-20 10:27:38.823899145 +0000", > "2018-08-20 10:27:51,168 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config'", > "+ origin_of_time=/var/lib/config-data/glance_api.origin_of_time", > "+ touch /var/lib/config-data/glance_api.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,glance_api_config,glance_api_paste_ini,glance_swift_config,glance_cache_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/config.pp\", 48]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/glance/api.pp\", 198]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/glance/manifests/api/db.pp\", 69]:[\"/etc/puppet/modules/glance/manifests/api.pp\", 371]", > "Warning: Unknown variable: 'default_store_real'. at /etc/puppet/modules/glance/manifests/api.pp:438:9", > "Warning: Scope(Class[Glance::Api]): default_store not provided, it will be automatically set to http", > "Warning: Scope(Class[Glance::Api::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/glance_api", > "++ stat -c %y /var/lib/config-data/glance_api.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:38.823899145 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/glance_api", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/glance_api", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/glance_api.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/glance_api --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/glance_api --mtime=1970-01-01", > "2018-08-20 10:27:51,168 INFO: 28339 -- Removing container: docker-puppet-glance_api", > "2018-08-20 10:27:51,212 DEBUG: 28339 -- docker-puppet-glance_api", > "2018-08-20 10:27:51,212 INFO: 28339 -- Finished processing puppet configs for glance_api", > "2018-08-20 10:27:51,212 INFO: 28339 -- Starting configuration of rabbitmq using image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:27:51,212 DEBUG: 28339 -- config_volume rabbitmq", > "2018-08-20 10:27:51,212 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,file", > "2018-08-20 10:27:51,212 DEBUG: 28339 -- manifest ['Rabbitmq_policy', 'Rabbitmq_user'].each |String $val| { noop_resource($val) }", > "2018-08-20 10:27:51,213 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:27:51,213 DEBUG: 28339 -- volumes []", > "2018-08-20 10:27:51,214 INFO: 28339 -- Removing container: docker-puppet-rabbitmq", > "2018-08-20 10:27:51,279 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:27:51,568 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.91 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Lvm/Augeas[udev options in lvm.conf]/returns: executed successfully", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}7dbba0ad6f107a5d6775f284addccc35'", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/api_paste_config]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/storage_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/default_availability_zone]/ensure: created", > "Notice: /Stage[main]/Cinder/Cinder_config[DEFAULT/enable_v3_api]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_servers]/ensure: created", > "Notice: /Stage[main]/Cinder::Glance/Cinder_config[DEFAULT/glance_api_version]/ensure: created", > "Notice: /Stage[main]/Cinder::Cron::Db_purge/Cron[cinder-manage db purge]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_listen]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/osapi_volume_workers]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[DEFAULT/nova_catalog_info]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Cinder_config[key_manager/backend]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_user]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_chunk_size]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_pool]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_unit]/ensure: created", > "Notice: /Stage[main]/Cinder::Backup::Ceph/Cinder_config[DEFAULT/backup_ceph_stripe_count]/ensure: created", > "Notice: /Stage[main]/Cinder::Scheduler/Cinder_config[DEFAULT/scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[DEFAULT/enabled_backends]/ensure: created", > "Notice: /Stage[main]/Cinder::Backends/Cinder_config[tripleo_ceph/backend_host]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Db/Oslo::Db[cinder_config]/Cinder_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Cinder::Logging/Oslo::Log[cinder_config]/Cinder_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Rabbit[cinder_config]/Cinder_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Messaging::Default[cinder_config]/Cinder_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Cinder/Oslo::Concurrency[cinder_config]/Cinder_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/driver]/ensure: created", > 
"Notice: /Stage[main]/Cinder::Ceilometer/Oslo::Messaging::Notifications[cinder_config]/Cinder_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Policy/Oslo::Policy[cinder_config]/Cinder_config[oslo_policy/policy_file]/ensure: created", > "Notice: /Stage[main]/Cinder::Api/Oslo::Middleware[cinder_config]/Cinder_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Cinder::Keystone::Authtoken/Keystone::Resource::Authtoken[cinder_config]/Cinder_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: 
/Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/File[cinder_wsgi]/ensure: defined content as '{md5}870efbe437d63cd260287cd36472d7b1'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_backend_name]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/volume_driver]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_ceph_conf]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_user]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_pool]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/Cinder_config[tripleo_ceph/rbd_secret_uuid]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File[/etc/sysconfig/openstack-cinder-volume]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Cinder::Volume::Rbd/Cinder::Backend::Rbd[tripleo_ceph]/File_line[set initscript env tripleo_ceph]/ensure: created", > "Notice: /Stage[main]/Cinder::Wsgi::Apache/Openstacklib::Wsgi::Apache[cinder_wsgi]/Apache::Vhost[cinder_wsgi]/Concat[10-cinder_wsgi.conf]/File[/etc/httpd/conf.d/10-cinder_wsgi.conf]/ensure: defined content as '{md5}083eb77078c11a38e340afdc95d1c1aa'", > "Notice: Applied catalog in 5.07 seconds", > " Total: 134", > " Success: 134", > " Changed: 134", > " Out of sync: 134", > " Skipped: 37", > " Total: 376", > " File line: 0.00", > " File: 0.29", > " Augeas: 0.69", > " Cinder config: 3.38", > " Config retrieval: 4.57", > " 
Total: 8.99", > " Config: 1534760860", > "Gathering files modified after 2018-08-20 10:27:33.829895195 +0000", > "2018-08-20 10:27:51,568 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line'", > "+ origin_of_time=/var/lib/config-data/cinder.origin_of_time", > "+ touch /var/lib/config-data/cinder.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line,cinder_config,file,concat,file_line /etc/config.pp", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/db.pp\", 69]:[\"/etc/puppet/modules/cinder/manifests/init.pp\", 320]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/cinder.pp\", 127]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/cinder/manifests/api.pp\", 203]:[\"/etc/config.pp\", 2]", > "Warning: Scope(Class[Cinder::Api]): The nova_catalog_admin_info parameter has been deprecated and will be removed in the future release.", > "Warning: Scope(Class[Cinder::Keystone::Authtoken]): The auth_uri parameter is deprecated. 
Please use www_authenticate_uri instead.", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/backup.pp:83:18", > "Warning: Unknown variable: 'ensure'. at /etc/puppet/modules/cinder/manifests/volume.pp:64:18", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/cinder", > "++ stat -c %y /var/lib/config-data/cinder.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:33.829895195 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/cinder", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/cinder", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/cinder.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/cinder --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/cinder --mtime=1970-01-01", > "2018-08-20 10:27:51,568 INFO: 28338 -- Removing container: docker-puppet-cinder", > "2018-08-20 10:27:51,622 DEBUG: 28338 -- docker-puppet-cinder", > "2018-08-20 10:27:51,622 INFO: 28338 -- Finished processing puppet configs for cinder", > "2018-08-20 10:27:51,622 INFO: 28338 -- Starting configuration of swift using image 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:27:51,622 DEBUG: 28338 -- config_volume swift", > "2018-08-20 10:27:51,622 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server", > "2018-08-20 10:27:51,622 DEBUG: 28338 -- manifest include ::tripleo::profile::base::swift::proxy", > "include ::tripleo::profile::base::swift::storage", > "2018-08-20 10:27:51,623 DEBUG: 28338 -- config_image 
192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:27:51,623 DEBUG: 28338 -- volumes []", > "2018-08-20 10:27:51,625 INFO: 28338 -- Removing container: docker-puppet-swift", > "2018-08-20 10:27:51,679 INFO: 28338 -- Image already exists: 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:27:51,682 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:27:51,683 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-swift --env PUPPET_TAGS=file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server --env NAME=swift --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpn4jjhS:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-swift-proxy-server:2018-08-17.2", > "2018-08-20 10:27:52,257 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 
0.45 seconds", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{md5}5281f207697925ddab4d83d74a751eb4'", > "Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created", > " Skipped: 7", > " Total: 9", > " Config retrieval: 0.55", > " Total: 0.56", > " Last run: 1534760871", > " Config: 1534760870", > "Gathering files modified after 2018-08-20 10:27:44.939903520 +0000", > "2018-08-20 10:27:52,257 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron'", > "+ origin_of_time=/var/lib/config-data/crond.origin_of_time", > "+ touch /var/lib/config-data/crond.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron /etc/config.pp", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/crond", > "++ stat -c %y /var/lib/config-data/crond.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:44.939903520 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/crond", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/crond", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/crond.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/crond --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/crond --mtime=1970-01-01", > "2018-08-20 10:27:52,257 INFO: 28340 -- Removing container: docker-puppet-crond", > "2018-08-20 10:27:52,296 DEBUG: 28340 -- docker-puppet-crond", > "2018-08-20 10:27:52,296 INFO: 28340 -- Finished processing puppet configs for crond", > "2018-08-20 10:27:52,297 
INFO: 28340 -- Starting configuration of haproxy using image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:27:52,297 DEBUG: 28340 -- config_volume haproxy", > "2018-08-20 10:27:52,297 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,haproxy_config", > "2018-08-20 10:27:52,297 DEBUG: 28340 -- manifest exec {'wait-for-settle': command => '/bin/true' }", > "2018-08-20 10:27:52,297 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:27:52,297 DEBUG: 28340 -- volumes [u'/etc/ipa/ca.crt:/etc/ipa/ca.crt:ro', u'/etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro', u'/etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro', u'/etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro']", > "2018-08-20 10:27:52,299 INFO: 28340 -- Removing container: docker-puppet-haproxy", > "2018-08-20 10:27:52,378 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:27:56,163 DEBUG: 28339 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-rabbitmq ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-rabbitmq", > "bfcea0418e79: Pulling fs layer", > "bfcea0418e79: Verifying Checksum", > "bfcea0418e79: Download complete", > "bfcea0418e79: Pull complete", > "Digest: sha256:d22aa7045ecdeeb2ff38b7e5f0ab6bfcfbd09eb5be6839b7d6eb0bff3b3536d7", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:27:56,166 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:27:56,166 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-rabbitmq --env PUPPET_TAGS=file,file_line,concat,augeas,cron,file --env NAME=rabbitmq --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpMfatLn:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-rabbitmq:2018-08-17.2", > "2018-08-20 10:27:56,852 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-haproxy ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-haproxy", > "7d656e0856db: Pulling fs layer", > "7d656e0856db: Download complete", > "7d656e0856db: Pull complete", > "Digest: sha256:39b7716a8916774b85fb6f73cd06899f1fe04b57721aecf36e0fb6cbaf8ac294", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:27:56,856 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:27:56,856 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-haproxy --env PUPPET_TAGS=file,file_line,concat,augeas,cron,haproxy_config --env NAME=haproxy --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmprHseqV:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /etc/ipa/ca.crt:/etc/ipa/ca.crt:ro --volume /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro --volume /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro --volume /etc/pki/tls/private/overcloud_endpoint.pem:/etc/pki/tls/private/overcloud_endpoint.pem:ro --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-haproxy:2018-08-17.2", > "2018-08-20 10:28:01,708 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- 
hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.83 seconds", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/api_class]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/username]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Keymaster/Swift_keymaster_config[kms_keymaster/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/auto_create_account_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/concurrency]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/expiring_objects_account_name]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/interval]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/process]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/processes]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/reclaim_age]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/recon_cache_path]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/report_interval]/ensure: created", > "Notice: 
/Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Objectexpirer/Swift_object_expirer_config[object-expirer/log_level]/ensure: created", > "Notice: /Stage[main]/Rsync::Server/Xinetd::Service[rsync]/File[/rsync]/ensure: defined content as '{md5}bba7398694e0bdc0eb408c8b2f07f221'", > "Notice: /Stage[main]/Rsync::Server/Concat[/etc/rsyncd.conf]/File[/etc/rsyncd.conf]/content: content changed '{md5}c63fccb45c0dcbbbe17d0f4bdba920ec' to '{md5}1f451659d9ad43edfffdf995424b221f'", > "Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed '%SWIFT_HASH_PATH_SUFFIX%' to 'QutiRe06EFQrjWGuRgNVemcwO'", > "Notice: /Stage[main]/Swift/Swift_config[swift-constraints/max_header_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/bind_ip]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/workers]/value: value changed '8' to 'auto'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_headers]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[DEFAULT/log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[pipeline:main/pipeline]/value: value changed 'catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server' to 'catch_errors healthcheck proxy-logging cache ratelimit bulk tempurl formpost authtoken s3api s3token keystone staticweb copy container_quotas account_quotas slo dlo versioned_writes proxy-logging proxy-server'", > 
"Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_facility]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_level]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/set log_address]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/log_handoffs]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/allow_account_management]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/account_autocreate]/value: value changed 'true' to 'True'", > "Notice: /Stage[main]/Swift::Proxy/Swift_proxy_config[app:proxy-server/node_timeout]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Cache/Swift_proxy_config[filter:cache/memcache_servers]/value: value changed '127.0.0.1:11211' to '172.17.1.16:11211'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/operator_roles]/value: value changed 'admin, SwiftOperator' to 'admin, swiftoperator, ResellerAdmin'", > "Notice: /Stage[main]/Swift::Proxy::Keystone/Swift_proxy_config[filter:keystone/reseller_prefix]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/File[/var/cache/swift]/mode: mode changed '0755' to '0700'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/log_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/signing_dir]/value: value changed '/tmp/keystone-signing-swift' to '/var/cache/swift'", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_url]/ensure: created", > 
"Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/auth_plugin]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/username]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/delay_auth_decision]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/cache]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/include_service_catalog]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Staticweb/Swift_proxy_config[filter:staticweb/url_base]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/clock_accuracy]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/max_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/log_sleep_time_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/rate_buffer_seconds]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Ratelimit/Swift_proxy_config[filter:ratelimit/account_ratelimit]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Formpost/Swift_proxy_config[filter:formpost/use]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_containers_per_extraction]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_failed_extractions]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/max_deletes_per_request]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Bulk/Swift_proxy_config[filter:bulk/yield_frequency]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Versioned_writes/Swift_proxy_config[filter:versioned_writes/allow_versioned_writes]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_segments]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_manifest_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/min_segment_size]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Slo/Swift_proxy_config[filter:slo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_after_segment]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/rate_limit_segments_per_sec]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Dlo/Swift_proxy_config[filter:dlo/max_get_time]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Copy/Swift_proxy_config[filter:copy/object_post_as_copy]/value: value changed 'false' to 'True'", > "Notice: /Stage[main]/Swift::Proxy::Container_quotas/Swift_proxy_config[filter:container_quotas/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Account_quotas/Swift_proxy_config[filter:account_quotas/use]/ensure: created", > "Notice: 
/Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Encryption/Swift_proxy_config[filter:encryption/disable_encryption]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::Kms_keymaster/Swift_proxy_config[filter:kms_keymaster/keymaster_config_path]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3api/Swift_proxy_config[filter:s3api/auth_pipeline_check]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/use]/ensure: created", > "Notice: /Stage[main]/Swift::Proxy::S3token/Swift_proxy_config[filter:s3token/auth_uri]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node]/ensure: created", > "Notice: /Stage[main]/Tripleo::Profile::Base::Swift::Storage/File[/srv/node/d1]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Account/Swift::Storage::Generic[account]/File[/etc/swift/account-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Container/Swift::Storage::Generic[container]/File[/etc/swift/container-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::Object/Swift::Storage::Generic[object]/File[/etc/swift/object-server/]/ensure: created", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6002]/Concat[/etc/swift/account-server.conf]/File[/etc/swift/account-server.conf]/ensure: defined content as '{md5}5517838f1776ef86ace12aa6a72a66a1'", > "Notice: /Stage[main]/Swift::Storage::All/Swift::Storage::Server[6001]/Concat[/etc/swift/container-server.conf]/File[/etc/swift/container-server.conf]/ensure: defined content as '{md5}be2da7fc275b083cf2f130a1fe4b0440'", > "Notice: 
/Stage[main]/Swift::Storage::All/Swift::Storage::Server[6000]/Concat[/etc/swift/object-server.conf]/File[/etc/swift/object-server.conf]/ensure: defined content as '{md5}a4d55e548173687f41092bdd4d304af4'", > "Notice: Applied catalog in 0.48 seconds", > " Total: 97", > " Success: 97", > " Total: 192", > " Out of sync: 97", > " Changed: 97", > " Swift config: 0.00", > " Swift keymaster config: 0.01", > " Swift object expirer config: 0.01", > " Swift proxy config: 0.17", > " Last run: 1534760880", > " Config retrieval: 2.15", > " Total: 2.38", > " Config: 1534760878", > "Gathering files modified after 2018-08-20 10:27:51.903908501 +0000", > "2018-08-20 10:28:01,708 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server'", > "+ origin_of_time=/var/lib/config-data/swift.origin_of_time", > "+ touch /var/lib/config-data/swift.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,swift_config,swift_proxy_config,swift_keymaster_config,swift_config,swift_container_config,swift_container_sync_realms_config,swift_account_config,swift_object_config,swift_object_expirer_config,rsync::server /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/swift/manifests/config.pp\", 38]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 147]", > " with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 163]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/proxy.pp\", 165]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/swift/proxy.pp\", 148]", > "Warning: Unknown variable: 'methods_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:100:56", > "Warning: Unknown variable: 'incoming_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:101:56", > "Warning: Unknown variable: 'incoming_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:102:56", > "Warning: Unknown variable: 'outgoing_remove_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:103:56", > "Warning: Unknown variable: 'outgoing_allow_headers_real'. at /etc/puppet/modules/swift/manifests/proxy/tempurl.pp:104:56", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the object storage server has changed from 6000 to 6200 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the container storage server has changed from 6001 to 6201 and will be changed in a later release", > "Warning: Scope(Class[Swift::Storage::All]): The default port for the account storage server has changed from 6002 to 6202 and will be changed in a later release", > "Warning: Class 'xinetd' is already defined at /etc/config.pp:6; cannot redefine at /etc/puppet/modules/xinetd/manifests/init.pp:12", > "Warning: Unknown variable: 'xinetd::params::default_user'. 
at /etc/puppet/modules/xinetd/manifests/service.pp:110:14", > "Warning: Unknown variable: 'xinetd::params::default_group'. at /etc/puppet/modules/xinetd/manifests/service.pp:116:15", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:161:13", > "Warning: Unknown variable: 'xinetd::service_name'. at /etc/puppet/modules/xinetd/manifests/service.pp:166:24", > "Warning: Unknown variable: 'xinetd::confdir'. at /etc/puppet/modules/xinetd/manifests/service.pp:167:21", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 189]:", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/swift/manifests/storage/server.pp\", 203]:", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/swift", > "++ stat -c %y /var/lib/config-data/swift.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:51.903908501 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/swift", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/swift", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/swift.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/swift --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/swift --mtime=1970-01-01", > "2018-08-20 10:28:01,708 INFO: 28338 -- Removing container: docker-puppet-swift", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- docker-puppet-swift", > "2018-08-20 10:28:01,748 INFO: 28338 -- Finished processing puppet configs for swift", > "2018-08-20 10:28:01,748 INFO: 28338 -- Starting configuration of 
heat_api_cfn using image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- config_volume heat_api_cfn", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- puppet_tags file,file_line,concat,augeas,cron,heat_config,file,concat,file_line", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- manifest include ::tripleo::profile::base::heat::api_cfn", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- config_image 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:28:01,748 DEBUG: 28338 -- volumes []", > "2018-08-20 10:28:01,750 INFO: 28338 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-20 10:28:01,814 INFO: 28338 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:28:02,415 DEBUG: 28338 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn", > "c6b04bd8872f: Already exists", > "35cc35212dc8: Pulling fs layer", > "35cc35212dc8: Verifying Checksum", > "35cc35212dc8: Download complete", > "35cc35212dc8: Pull complete", > "Digest: sha256:412c6409e9f8c7e7af115b204711265b750f51d3ec55660d904d6e23a93a1395", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:28:02,418 DEBUG: 28338 -- NET_HOST enabled", > "2018-08-20 10:28:02,418 DEBUG: 28338 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-heat_api_cfn --env PUPPET_TAGS=file,file_line,concat,augeas,cron,heat_config,file,concat,file_line --env NAME=heat_api_cfn --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpTA2mqv:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume 
tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-heat-api-cfn:2018-08-17.2", > "2018-08-20 10:28:06,566 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.44 seconds", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/content: content changed '{md5}1f337186b0e1ba5ee82760cb437fb810' to '{md5}f20d6ac1a8f78fea670abb8b757af490'", > "Notice: /Stage[main]/Haproxy/Haproxy::Instance[haproxy]/Haproxy::Config[haproxy]/Concat[/etc/haproxy/haproxy.cfg]/File[/etc/haproxy/haproxy.cfg]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.26 seconds", > " Changed: 1", > " Out of sync: 1", > " Total: 76", > " Last run: 1534760885", > " Config retrieval: 2.71", > " Total: 2.74", > " Config: 1534760882", > "Gathering files modified after 2018-08-20 10:27:57.065912193 +0000", > "2018-08-20 10:28:06,566 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,haproxy_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,haproxy_config'", > "+ origin_of_time=/var/lib/config-data/haproxy.origin_of_time", > "+ touch /var/lib/config-data/haproxy.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog 
--logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,haproxy_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/pacemaker/haproxy_with_vip.pp\", 65]:", > "Warning: Scope(Haproxy::Config[haproxy]): haproxy: The $merge_options parameter will default to true in the next major release. Please review the documentation regarding the implications.", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/haproxy", > "++ stat -c %y /var/lib/config-data/haproxy.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:57.065912193 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/haproxy", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/haproxy", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/haproxy.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/haproxy --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/haproxy --mtime=1970-01-01", > "2018-08-20 10:28:06,566 INFO: 28340 -- Removing container: docker-puppet-haproxy", > "2018-08-20 10:28:06,605 DEBUG: 28340 -- docker-puppet-haproxy", > "2018-08-20 10:28:06,605 INFO: 28340 -- Finished processing puppet configs for haproxy", > "2018-08-20 10:28:06,605 INFO: 28340 -- Starting configuration of ceilometer using image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:28:06,605 DEBUG: 28340 -- config_volume ceilometer", > "2018-08-20 10:28:06,605 DEBUG: 28340 -- puppet_tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config", > "2018-08-20 
10:28:06,605 DEBUG: 28340 -- manifest include ::tripleo::profile::base::ceilometer::agent::polling", > "include ::tripleo::profile::base::ceilometer::agent::notification", > "2018-08-20 10:28:06,605 DEBUG: 28340 -- config_image 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:28:06,605 DEBUG: 28340 -- volumes []", > "2018-08-20 10:28:06,606 INFO: 28340 -- Removing container: docker-puppet-ceilometer", > "2018-08-20 10:28:06,665 INFO: 28340 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:28:08,754 DEBUG: 28340 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-ceilometer-central ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-ceilometer-central", > "8e8e24e487c6: Pulling fs layer", > "abd90b860525: Pulling fs layer", > "8e8e24e487c6: Verifying Checksum", > "8e8e24e487c6: Download complete", > "abd90b860525: Verifying Checksum", > "abd90b860525: Download complete", > "8e8e24e487c6: Pull complete", > "abd90b860525: Pull complete", > "Digest: sha256:b64134d855985bb79b71c51feabab7a9a4b3c5055bc40ae3a46583ce2f945685", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:28:08,757 DEBUG: 28340 -- NET_HOST enabled", > "2018-08-20 10:28:08,757 DEBUG: 28340 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-ceilometer --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpX16m1l:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-ceilometer-central:2018-08-17.2", > "2018-08-20 10:28:09,255 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 0.84 seconds", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq]/group: group changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/rabbitmq/ssl]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-env.config]/ensure: defined content as '{md5}b126e4b8423a26246952d34c225c6fdd'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq-inetrc]/ensure: defined content as '{md5}12f8d1a1f9f57f23c1be6c7bf2286e73'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmqadmin.conf]/ensure: defined content as '{md5}44d4ef5cb86ab30e6127e83939ef09c4'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d]/ensure: created", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/systemd/system/rabbitmq-server.service.d/limits.conf]/ensure: defined content as '{md5}91d370d2c5a1af171c9d5b5985fca733'", > "Notice: /Stage[main]/Rabbitmq::Config/File[/etc/security/limits.d/rabbitmq-server.conf]/ensure: defined content as '{md5}1030abc4db405b5f2969643e99bc7435'", > "Notice: 
/Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/content: content changed '{md5}b346ec0a8320f85f795bf612f6b02da7' to '{md5}57b05a0845da806e8260a42ee69d6455'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/owner: owner changed 'rabbitmq' to 'root'", > "Notice: /Stage[main]/Rabbitmq::Config/File[rabbitmq.config]/mode: mode changed '0644' to '0640'", > "Notice: Applied catalog in 0.06 seconds", > " Total: 12", > " Success: 12", > " Total: 19", > " Out of sync: 9", > " Changed: 9", > " File: 0.04", > " Config retrieval: 1.01", > " Total: 1.05", > " Last run: 1534760888", > " Config: 1534760887", > "Gathering files modified after 2018-08-20 10:27:56.394911713 +0000", > "2018-08-20 10:28:09,255 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/rabbitmq.origin_of_time", > "+ touch /var/lib/config-data/rabbitmq.origin_of_time", > "Warning: ModuleLoader: module 'rabbitmq' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/rabbitmq", > "++ stat -c %y /var/lib/config-data/rabbitmq.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:27:56.394911713 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/rabbitmq", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/rabbitmq", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/rabbitmq.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/rabbitmq --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/rabbitmq --mtime=1970-01-01", > "2018-08-20 10:28:09,255 INFO: 28339 -- Removing container: docker-puppet-rabbitmq", > "2018-08-20 10:28:09,297 DEBUG: 28339 -- docker-puppet-rabbitmq", > "2018-08-20 10:28:09,297 INFO: 28339 -- 
Finished processing puppet configs for rabbitmq", > "2018-08-20 10:28:09,297 INFO: 28339 -- Starting configuration of neutron using image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:28:09,297 DEBUG: 28339 -- config_volume neutron", > "2018-08-20 10:28:09,297 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2", > "2018-08-20 10:28:09,298 DEBUG: 28339 -- manifest include tripleo::profile::base::neutron::server", > "include ::tripleo::profile::base::neutron::plugins::ml2", > "include tripleo::profile::base::neutron::dhcp", > "include tripleo::profile::base::neutron::l3", > "include tripleo::profile::base::neutron::metadata", > "include ::tripleo::profile::base::neutron::ovs", > "2018-08-20 10:28:09,298 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:28:09,298 DEBUG: 28339 -- volumes [u'/lib/modules:/lib/modules:ro', u'/run/openvswitch:/run/openvswitch']", > "2018-08-20 10:28:09,299 INFO: 28339 -- Removing container: docker-puppet-neutron", > "2018-08-20 10:28:09,362 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:28:14,279 DEBUG: 28339 -- Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-neutron-server ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-neutron-server", > "0005d75b4b48: Pulling fs layer", > "f7e4f140def4: Pulling fs layer", > "f7e4f140def4: Verifying Checksum", > "0005d75b4b48: Verifying Checksum", > "0005d75b4b48: Download complete", > "0005d75b4b48: Pull complete", > "f7e4f140def4: Pull complete", > "Digest: sha256:0cd0e9583a7627f44e295392eeb86e50c799918cf38ebd12c36a0714f43b759b", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:28:14,282 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:28:14,282 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-neutron --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 --env NAME=neutron --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpvkfCuz:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 
192.168.24.1:8787/rhosp14/openstack-neutron-server:2018-08-17.2", > "2018-08-20 10:28:16,780 DEBUG: 28338 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 3.65 seconds", > "Notice: /Stage[main]/Heat::Api_cfn/Heat_config[heat_api_cfn/bind_host]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}6bfb91ec3128b1252913d8ba04a9c38f'", > "Notice: /Stage[main]/Apache::Mod::Headers/Apache::Mod[headers]/File[headers.load]/ensure: defined content as '{md5}96094c96352002c43ada5bdf8650ff38'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[/var/www/cgi-bin/heat]/ensure: created", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/File[heat_api_cfn_wsgi]/ensure: defined content as '{md5}c3ae61ab87649c8cdfab8977da2b194b'", > "Notice: /Stage[main]/Heat::Wsgi::Apache_api_cfn/Heat::Wsgi::Apache[api_cfn]/Openstacklib::Wsgi::Apache[heat_api_cfn_wsgi]/Apache::Vhost[heat_api_cfn_wsgi]/Concat[10-heat_api_cfn_wsgi.conf]/File[/etc/httpd/conf.d/10-heat_api_cfn_wsgi.conf]/ensure: defined content as '{md5}dec9ed78f8f4a5b645106fa3b8a3a776'", > "Notice: Applied catalog in 2.59 seconds", > " Total: 122", > " Success: 122", > " Changed: 122", > " Out of sync: 122", > " Total: 338", > " Heat config: 1.54", > " Last run: 1534760895", > " Config retrieval: 4.22", > " Total: 6.18", > " Config: 1534760888", > "Gathering files modified after 2018-08-20 10:28:02.652916189 +0000", > "2018-08-20 10:28:16,780 DEBUG: 28338 -- + mkdir -p /etc/puppet", > "+ origin_of_time=/var/lib/config-data/heat_api_cfn.origin_of_time", > "+ touch /var/lib/config-data/heat_api_cfn.origin_of_time", > " with 
Stdlib::Compat::Integer. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/heat/manifests/wsgi/apache_api_cfn.pp\", 125]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/heat_api_cfn", > "++ stat -c %y /var/lib/config-data/heat_api_cfn.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:28:02.652916189 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/heat_api_cfn", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/heat_api_cfn", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/heat_api_cfn.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/heat_api_cfn --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/heat_api_cfn --mtime=1970-01-01", > "2018-08-20 10:28:16,780 INFO: 28338 -- Removing container: docker-puppet-heat_api_cfn", > "2018-08-20 10:28:16,824 DEBUG: 28338 -- docker-puppet-heat_api_cfn", > "2018-08-20 10:28:16,825 INFO: 28338 -- Finished processing puppet configs for heat_api_cfn", > "2018-08-20 10:28:18,037 DEBUG: 28340 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.14 seconds", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/region_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/username]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/password]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/auth_type]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Auth/Ceilometer_config[service_credentials/interface]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/File[event_pipeline]/ensure: defined content as '{md5}e1b13cf3e430a5cacf9cd8ad4704c3b5'", > "Notice: /Stage[main]/Ceilometer::Agent::Notification/Ceilometer_config[notification/ack_on_event_error]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: 
/Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created", > "Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created", > "Notice: Applied catalog in 0.58 seconds", > " Total: 26", > " Success: 26", > " Total: 156", > " Out of sync: 26", > " Changed: 26", > " Skipped: 35", > " Ceilometer config: 0.44", > " Config retrieval: 1.35", > " Total: 1.79", > " Last run: 1534760897", > " Config: 1534760895", > "Gathering files modified after 2018-08-20 10:28:08.960920701 +0000", > "2018-08-20 10:28:18,038 DEBUG: 28340 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config'", > "+ origin_of_time=/var/lib/config-data/ceilometer.origin_of_time", > "+ touch /var/lib/config-data/ceilometer.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config /etc/config.pp", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/ceilometer/manifests/config.pp\", 35]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer.pp\", 111]", > "Warning: Scope(Class[Ceilometer::Dispatcher::Gnocchi]): The class ceilometer::dispatcher::gnocchi is deprecated. All its", > " options must be set as url parameters in", > " ceilometer::agent::notification::pipeline_publishers. Depending of the used", > " Gnocchi version their might be ignored.", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/ceilometer/manifests/agent/notification.pp\", 118]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/ceilometer/agent/notification.pp\", 34]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/spool/cron /var/lib/config-data/ceilometer", > "++ stat -c %y /var/lib/config-data/ceilometer.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:28:08.960920701 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/ceilometer", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/ceilometer", > "++ find /etc /root /opt /var/spool/cron -newer /var/lib/config-data/ceilometer.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/ceilometer --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/ceilometer --mtime=1970-01-01", > "2018-08-20 10:28:18,038 INFO: 28340 -- Removing container: docker-puppet-ceilometer", > "2018-08-20 10:28:18,080 DEBUG: 28340 -- docker-puppet-ceilometer", > "2018-08-20 10:28:18,080 INFO: 28340 -- Finished processing puppet configs for ceilometer", > "2018-08-20 10:28:26,743 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 2.99 seconds", > "Notice: 
/Stage[main]/Neutron/Neutron_config[DEFAULT/bind_host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agents_per_network]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_id]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/endpoint_type]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[nova/tenant_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_status_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server::Notifications/Neutron_config[DEFAULT/notify_nova_on_port_data_changes]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/l3_ha]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/max_l3_agents_per_router]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/api_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/rpc_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_scheduler_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/router_distributed]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/enable_dvr]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Neutron_config[DEFAULT/allow_automatic_l3agent_failover]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_firewall_rule]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_network_gateway]/ensure: created", > "Notice: /Stage[main]/Neutron::Quota/Neutron_config[quotas/quota_packet_filter]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/neutron/plugin.ini]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/File[/etc/default/neutron-server]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/type_drivers]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/tenant_network_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/mechanism_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/path_mtu]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/extension_drivers]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[ml2/overlay_ip_version]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron_plugin_ml2[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_isolated_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/force_metadata]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/enable_metadata_network]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/state_path]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/resync_interval]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/interface_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/root_helper]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_dns_servers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Dhcp/Neutron_dhcp_agent_config[DEFAULT/dnsmasq_local_resolv]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/interface_driver]/ensure: created", > 
"Notice: /Stage[main]/Neutron::Agents::L3/Neutron_l3_agent_config[DEFAULT/agent_mode]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_ip]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Metadata/Neutron_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/bridge_mappings]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/l2_population]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/arp_responder]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/enable_distributed_routing]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/drop_flows_on_start]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/extensions]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/integration_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[securitygroup/firewall_driver]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/tunnel_bridge]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[ovs/local_ip]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/tunnel_types]/ensure: created", > "Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Neutron_agent_ovs[agent/vxlan_udp_port]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created", > "Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created", > "Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/ssl]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/connection]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Db/Oslo::Db[neutron_config]/Neutron_config[database/db_max_retries]/ensure: created", > "Notice: /Stage[main]/Neutron::Policy/Oslo::Policy[neutron_config]/Neutron_config[oslo_policy/policy_file]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/www_authenticate_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_uri]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_type]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/auth_url]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/username]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/password]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/user_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Keystone::Authtoken/Keystone::Resource::Authtoken[neutron_config]/Neutron_config[keystone_authtoken/project_domain_name]/ensure: created", > "Notice: /Stage[main]/Neutron::Server/Oslo::Middleware[neutron_config]/Neutron_config[oslo_middleware/enable_proxy_headers_parsing]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vxlan_group]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vxlan]/Neutron_plugin_ml2[ml2_type_vxlan/vni_ranges]/ensure: created", > "Notice: 
/Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[vlan]/Neutron_plugin_ml2[ml2_type_vlan/network_vlan_ranges]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[flat]/Neutron_plugin_ml2[ml2_type_flat/flat_networks]/ensure: created", > "Notice: /Stage[main]/Neutron::Plugins::Ml2/Neutron::Plugins::Ml2::Type_driver[gre]/Neutron_plugin_ml2[ml2_type_gre/tunnel_id_ranges]/ensure: created", > "Notice: Applied catalog in 1.57 seconds", > " Total: 105", > " Success: 105", > " Changed: 105", > " Out of sync: 105", > " Total: 357", > " Skipped: 44", > " Neutron api config: 0.00", > " Neutron l3 agent config: 0.01", > " Neutron metadata agent config: 0.02", > " Neutron dhcp agent config: 0.02", > " Neutron agent ovs: 0.06", > " Neutron plugin ml2: 0.08", > " Neutron config: 1.00", > " Last run: 1534760905", > " Config retrieval: 3.33", > " Total: 4.60", > " Config: 1534760900", > "Gathering files modified after 2018-08-20 10:28:14.477924647 +0000", > "2018-08-20 10:28:26,743 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ '[' -n file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2'", > "+ origin_of_time=/var/lib/config-data/neutron.origin_of_time", > "+ touch /var/lib/config-data/neutron.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags 
file,file_line,concat,augeas,cron,neutron_config,neutron_api_config,neutron_plugin_ml2,neutron_config,neutron_dhcp_agent_config,neutron_config,neutron_l3_agent_config,neutron_config,neutron_metadata_agent_config,neutron_config,neutron_agent_ovs,neutron_plugin_ml2 /etc/config.pp", > "PMD: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)", > "PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory", > "PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Array instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/neutron/manifests/init.pp\", 486]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/server.pp\", 104]", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/config.pp\", 136]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp\", 141]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/neutron/manifests/db.pp\", 69]:[\"/etc/puppet/modules/neutron/manifests/server.pp\", 290]", > "Warning: Scope(Class[Neutron::Keystone::Authtoken]): The auth_uri parameter is deprecated. Please use www_authenticate_uri instead.", > "Warning: Unknown variable: '::neutron::params::metadata_agent_package'. at /etc/puppet/modules/neutron/manifests/agents/metadata.pp:122:6", > " with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/neutron/manifests/agents/ml2/ovs.pp\", 208]:[\"/etc/puppet/modules/tripleo/manifests/profile/base/neutron/ovs.pp\", 59]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/neutron", > "++ stat -c %y /var/lib/config-data/neutron.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:28:14.477924647 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/neutron", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/neutron", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/neutron.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/neutron --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/neutron --mtime=1970-01-01", > "2018-08-20 10:28:26,743 INFO: 28339 -- Removing container: docker-puppet-neutron", > "2018-08-20 10:28:26,778 DEBUG: 28339 -- docker-puppet-neutron", > "2018-08-20 10:28:26,778 INFO: 28339 -- Finished processing puppet configs for neutron", > "2018-08-20 10:28:26,778 INFO: 28339 -- Starting configuration of horizon using image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:28:26,779 DEBUG: 28339 -- config_volume horizon", > "2018-08-20 10:28:26,779 DEBUG: 28339 -- puppet_tags file,file_line,concat,augeas,cron,horizon_config", > "2018-08-20 10:28:26,779 DEBUG: 28339 -- manifest include ::tripleo::profile::base::horizon", > "2018-08-20 10:28:26,779 DEBUG: 28339 -- config_image 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:28:26,779 DEBUG: 28339 -- volumes []", > "2018-08-20 10:28:26,779 INFO: 28339 -- Removing container: docker-puppet-horizon", > "2018-08-20 10:28:26,839 INFO: 28339 -- Pulling image: 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:28:32,128 DEBUG: 28339 -- Trying to pull repository 
192.168.24.1:8787/rhosp14/openstack-horizon ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-horizon", > "7fba42133f32: Pulling fs layer", > "7fba42133f32: Verifying Checksum", > "7fba42133f32: Download complete", > "7fba42133f32: Pull complete", > "Digest: sha256:203bbde4a1e1b966eb2760abf728abd9f941ec2c6e2bb6a952f85d376b00dab4", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:28:32,131 DEBUG: 28339 -- NET_HOST enabled", > "2018-08-20 10:28:32,131 DEBUG: 28339 -- Running docker command: /usr/bin/docker run --user root --name docker-puppet-horizon --env PUPPET_TAGS=file,file_line,concat,augeas,cron,horizon_config --env NAME=horizon --env HOSTNAME=controller-0 --env NO_ARCHIVE= --env STEP=6 --volume /etc/localtime:/etc/localtime:ro --volume /tmp/tmpemQujb:/etc/config.pp:ro,z --volume /etc/puppet/:/tmp/puppet-etc/:ro,z --volume /usr/share/openstack-puppet/modules/:/usr/share/openstack-puppet/modules/:ro,z --volume /var/lib/config-data:/var/lib/config-data/:z --volume tripleo_logs:/var/log/tripleo/ --volume /dev/log:/dev/log --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /var/lib/docker-puppet/docker-puppet.sh:/var/lib/docker-puppet/docker-puppet.sh:z --entrypoint /var/lib/docker-puppet/docker-puppet.sh --net host --volume /etc/hosts:/etc/hosts:ro 192.168.24.1:8787/rhosp14/openstack-horizon:2018-08-17.2", > "2018-08-20 10:28:41,982 DEBUG: 28339 -- Notice: hiera(): Cannot load backend module_data: cannot load such file -- hiera/backend/module_data_backend", > "Notice: Compiled catalog for controller-0.localdomain in environment production in 1.95 seconds", > "Notice: /Stage[main]/Apache::Mod::Remoteip/File[remoteip.conf]/ensure: 
defined content as '{md5}5e70f28d6cca0d978242202de6e8e0e3'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon]/mode: mode changed '0750' to '0751'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/var/log/horizon/horizon.log]/ensure: created", > "Notice: /Stage[main]/Apache/Concat[/etc/httpd/conf/ports.conf]/File[/etc/httpd/conf/ports.conf]/ensure: defined content as '{md5}05a4d6cbec792391f771b5d1a68687d9'", > "Notice: /Stage[main]/Apache::Mod::Remoteip/Apache::Mod[remoteip]/File[remoteip.load]/ensure: defined content as '{md5}118eb7518a1d018a162d23dfe32c4bad'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/content: content changed '{md5}08ef627d85a561822cb014a16d6ae78a' to '{md5}080e6b861449e4392bab36cc5bea00e9'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/owner: owner changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon/Concat[/etc/openstack-dashboard/local_settings]/File[/etc/openstack-dashboard/local_settings]/group: group changed 'horizon' to 'apache'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/File[/etc/httpd/conf.d/openstack-dashboard.conf]/content: content changed '{md5}4cb4b1391d3553951208fad1ce791e5c' to '{md5}3f4b1c53d0e150dae37b3ee5dcaf622d'", > "Notice: /Stage[main]/Horizon::Wsgi::Apache/Apache::Vhost[horizon_vhost]/Concat[10-horizon_vhost.conf]/File[/etc/httpd/conf.d/10-horizon_vhost.conf]/ensure: defined content as '{md5}bc5cb3b80367d89e79e323750fcbb4f0'", > "Notice: Applied catalog in 0.55 seconds", > " Total: 86", > " Success: 86", > " Total: 172", > " Out of sync: 84", > " Changed: 84", > " File: 0.21", > " Last run: 1534760921", > " Config retrieval: 2.28", > " Total: 2.50", > " Config: 1534760918", > "Gathering files modified after 2018-08-20 10:28:32.330936423 +0000", > "2018-08-20 10:28:41,982 DEBUG: 28339 -- + mkdir -p /etc/puppet", > "+ '[' -n 
file,file_line,concat,augeas,cron,horizon_config ']'", > "+ TAGS='--tags file,file_line,concat,augeas,cron,horizon_config'", > "+ origin_of_time=/var/lib/config-data/horizon.origin_of_time", > "+ touch /var/lib/config-data/horizon.origin_of_time", > "+ /usr/bin/puppet apply --summarize --detailed-exitcodes --color=false --logdest syslog --logdest console --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules --tags file,file_line,concat,augeas,cron,horizon_config /etc/config.pp", > "Warning: This method is deprecated, please use match expressions with Stdlib::Compat::Ipv6 instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at [\"/etc/puppet/modules/tripleo/manifests/profile/base/horizon.pp\", 97]:[\"/etc/config.pp\", 2]", > "Warning: ModuleLoader: module 'horizon' has unresolved dependencies - it will only see those that are resolved. Use 'puppet module list --tree' to see information about modules", > "Warning: Undefined variable ''; ", > " with Stdlib::Compat::Hash. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 589]:[\"/etc/config.pp\", 2]", > " with Pattern[]. There is further documentation for validate_legacy function in the README. at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 590]:[\"/etc/config.pp\", 2]", > " with Stdlib::Compat::Absolute_Path. There is further documentation for validate_legacy function in the README. 
at [\"/etc/puppet/modules/horizon/manifests/init.pp\", 592]:[\"/etc/config.pp\", 2]", > "+ rsync -a -R --delay-updates --delete-after /etc /root /opt /var/www /var/spool/cron /var/lib/config-data/horizon", > "++ stat -c %y /var/lib/config-data/horizon.origin_of_time", > "+ echo 'Gathering files modified after 2018-08-20 10:28:32.330936423 +0000'", > "+ mkdir -p /var/lib/config-data/puppet-generated/horizon", > "+ rsync -a -R -0 --delay-updates --delete-after --files-from=/dev/fd/63 / /var/lib/config-data/puppet-generated/horizon", > "++ find /etc /root /opt /var/www /var/spool/cron -newer /var/lib/config-data/horizon.origin_of_time -not -path '/etc/puppet*' -print0", > "+ tar -c -f - /var/lib/config-data/horizon --mtime=1970-01-01", > "+ tar -c -f - /var/lib/config-data/puppet-generated/horizon --mtime=1970-01-01", > "2018-08-20 10:28:41,982 INFO: 28339 -- Removing container: docker-puppet-horizon", > "2018-08-20 10:28:42,025 DEBUG: 28339 -- docker-puppet-horizon", > "2018-08-20 10:28:42,026 INFO: 28339 -- Finished processing puppet configs for horizon", > "2018-08-20 10:28:42,027 DEBUG: 28337 -- CONFIG_VOLUME_PREFIX: /var/lib/config-data", > "2018-08-20 10:28:42,027 DEBUG: 28337 -- STARTUP_CONFIG_PATTERN: /var/lib/tripleo-config/docker-container-startup-config-step_*.json", > "2018-08-20 10:28:42,029 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-08-20 10:28:42,029 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-08-20 10:28:42,029 DEBUG: 28337 -- Updating config hash for mysql_bootstrap, config_volume=heat_api_cfn hash=53a2109fab9262afe3340cddaca1325f", > "2018-08-20 10:28:42,029 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-08-20 
10:28:42,029 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-08-20 10:28:42,029 DEBUG: 28337 -- Updating config hash for rabbitmq_bootstrap, config_volume=heat_api_cfn hash=6e540f3c973b62d1f7942d76d6553261", > "2018-08-20 10:28:42,030 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/memcached/etc/sysconfig.md5sum for config_volume /var/lib/config-data/memcached/etc/sysconfig", > "2018-08-20 10:28:42,031 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-08-20 10:28:42,031 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/nova_placement.md5sum for config_volume /var/lib/config-data/puppet-generated/nova_placement", > "2018-08-20 10:28:42,031 DEBUG: 28337 -- Updating config hash for nova_placement, config_volume=heat_api_cfn hash=baeb99aff2a632edc250851ed9cc9617", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Updating config hash for swift_rsync_fix, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/heat/etc/heat.md5sum for config_volume 
/var/lib/config-data/heat/etc/heat", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/heat/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/heat/etc/my.cnf.d", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data.md5sum for config_volume /var/lib/config-data", > "2018-08-20 10:28:42,032 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/swift/etc", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/keystone.md5sum for config_volume /var/lib/config-data/puppet-generated/keystone", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Updating config hash for keystone_cron, config_volume=heat_api_cfn hash=d821c407ac32ca1b434b4f59857e48c2", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/panko/etc.md5sum for config_volume /var/lib/config-data/panko/etc", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/panko/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/panko/etc/my.cnf.d", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Updating config hash for keystone_db_sync, config_volume=heat_api_cfn hash=d821c407ac32ca1b434b4f59857e48c2", > "2018-08-20 10:28:42,033 DEBUG: 28337 -- Updating config hash for keystone, config_volume=heat_api_cfn hash=d821c407ac32ca1b434b4f59857e48c2", > 
"2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/aodh/etc/aodh.md5sum for config_volume /var/lib/config-data/aodh/etc/aodh", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/aodh/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/aodh/etc/my.cnf.d", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Updating config hash for neutron_ovs_bridge, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/cinder/etc/cinder.md5sum for config_volume /var/lib/config-data/cinder/etc/cinder", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/cinder/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/cinder/etc/my.cnf.d", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Updating config hash for glance_api_db_sync, config_volume=heat_api_cfn 
hash=83be9d0a7ff400ceca794271a71b3964", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/neutron/etc.md5sum for config_volume /var/lib/config-data/neutron/etc", > "2018-08-20 10:28:42,034 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/neutron/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/neutron/etc/my.cnf.d", > "2018-08-20 10:28:42,035 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/neutron/usr/share.md5sum for config_volume /var/lib/config-data/neutron/usr/share", > "2018-08-20 10:28:42,035 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/sahara/etc/sahara.md5sum for config_volume /var/lib/config-data/sahara/etc/sahara", > "2018-08-20 10:28:42,035 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-08-20 10:28:42,035 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/horizon.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon", > "2018-08-20 10:28:42,035 DEBUG: 28337 -- Updating config hash for horizon, config_volume=heat_api_cfn hash=90db3620f5f63dba050bf03819820f8e", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/clustercheck.md5sum for config_volume /var/lib/config-data/puppet-generated/clustercheck", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Updating config hash for clustercheck, config_volume=heat_api_cfn hash=3a866db6ccf01b692acb2b878650b5e0", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Got hashfile 
/var/lib/config-data/puppet-generated/mysql.md5sum for config_volume /var/lib/config-data/puppet-generated/mysql", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Updating config hash for mysql_restart_bundle, config_volume=heat_api_cfn hash=53a2109fab9262afe3340cddaca1325f", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/haproxy.md5sum for config_volume /var/lib/config-data/puppet-generated/haproxy", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Updating config hash for haproxy_restart_bundle, config_volume=heat_api_cfn hash=aa52d44da98faea8cd7e63631bdfdfed", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/rabbitmq.md5sum for config_volume /var/lib/config-data/puppet-generated/rabbitmq", > "2018-08-20 10:28:42,037 DEBUG: 28337 -- Updating config hash for rabbitmq_restart_bundle, config_volume=heat_api_cfn hash=6e540f3c973b62d1f7942d76d6553261", > "2018-08-20 10:28:42,038 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/horizon/etc.md5sum for config_volume /var/lib/config-data/puppet-generated/horizon/etc", > "2018-08-20 10:28:42,038 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-08-20 10:28:42,038 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/redis.md5sum for config_volume /var/lib/config-data/puppet-generated/redis", > "2018-08-20 10:28:42,038 DEBUG: 28337 -- Updating config hash for redis_restart_bundle, config_volume=heat_api_cfn 
hash=cc50d33c616fb521f0495db373162265", > "2018-08-20 10:28:42,039 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,039 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Updating config hash for cinder_volume_restart_bundle, config_volume=heat_api_cfn hash=367641b86bd126aff95ee0c55a51fc3b", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Updating config hash for gnocchi_statsd, config_volume=heat_api_cfn hash=cbcc6276d9d721b7b3dae32718d01350", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Updating config hash for cinder_backup_restart_bundle, config_volume=heat_api_cfn hash=367641b86bd126aff95ee0c55a51fc3b", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Updating config hash for gnocchi_metricd, config_volume=heat_api_cfn hash=cbcc6276d9d721b7b3dae32718d01350", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/nova/etc/my.cnf.d.md5sum for config_volume /var/lib/config-data/nova/etc/my.cnf.d", > "2018-08-20 10:28:42,040 DEBUG: 28337 -- Looking for hashfile 
/var/lib/config-data/nova/etc/nova.md5sum for config_volume /var/lib/config-data/nova/etc/nova", > "2018-08-20 10:28:42,041 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/ceilometer/etc/ceilometer.md5sum for config_volume /var/lib/config-data/ceilometer/etc/ceilometer", > "2018-08-20 10:28:42,041 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-08-20 10:28:42,041 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/gnocchi.md5sum for config_volume /var/lib/config-data/puppet-generated/gnocchi", > "2018-08-20 10:28:42,041 DEBUG: 28337 -- Updating config hash for gnocchi_api, config_volume=heat_api_cfn hash=cbcc6276d9d721b7b3dae32718d01350", > "2018-08-20 10:28:42,041 DEBUG: 28337 -- Updating config hash for gnocchi_db_sync, config_volume=heat_api_cfn hash=cbcc6276d9d721b7b3dae32718d01350", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Updating config hash for swift_container_updater, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Updating config hash for aodh_evaluator, config_volume=heat_api_cfn hash=f311982a39e04b74f3edfe7925293276", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Looking 
for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Updating config hash for nova_scheduler, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,043 DEBUG: 28337 -- Updating config hash for swift_object_server, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Updating config hash for cinder_api, config_volume=heat_api_cfn hash=367641b86bd126aff95ee0c55a51fc3b", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Updating config hash for swift_proxy, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,044 DEBUG: 28337 
-- Updating config hash for neutron_dhcp, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Updating config hash for heat_api, config_volume=heat_api_cfn hash=230f8870e1c2ddaa58ab84d221badc5f", > "2018-08-20 10:28:42,044 DEBUG: 28337 -- Updating config hash for swift_object_auditor, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Updating config hash for neutron_metadata_agent, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Updating config hash for ceilometer_agent_central, config_volume=heat_api_cfn hash=ff66d555cf93c16aef03dbc182bbbf16", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", 
> "2018-08-20 10:28:42,045 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Updating config hash for swift_account_replicator, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Updating config hash for aodh_notifier, config_volume=heat_api_cfn hash=f311982a39e04b74f3edfe7925293276", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,045 DEBUG: 28337 -- Updating config hash for nova_api_cron, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Updating config hash for nova_consoleauth, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume 
/var/lib/config-data/puppet-generated/glance_api", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/glance_api.md5sum for config_volume /var/lib/config-data/puppet-generated/glance_api", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Updating config hash for glance_api, config_volume=heat_api_cfn hash=83be9d0a7ff400ceca794271a71b3964", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Updating config hash for swift_account_reaper, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/ceilometer.md5sum for config_volume /var/lib/config-data/puppet-generated/ceilometer", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-08-20 10:28:42,046 DEBUG: 28337 -- Updating config hash for ceilometer_agent_notification, config_volume=heat_api_cfn hash=ff66d555cf93c16aef03dbc182bbbf16-7da0b4ed8224d06139116ef2dc86ad6d", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 
10:28:42,047 DEBUG: 28337 -- Updating config hash for nova_vnc_proxy, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Updating config hash for swift_rsync, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Updating config hash for nova_api, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Updating config hash for aodh_api, config_volume=heat_api_cfn hash=f311982a39e04b74f3edfe7925293276", > "2018-08-20 10:28:42,047 DEBUG: 28337 -- Updating config hash for nova_metadata, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/heat.md5sum for config_volume /var/lib/config-data/puppet-generated/heat", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Updating 
config hash for heat_engine, config_volume=heat_api_cfn hash=f74f042c7d7695771f23e42ae47636c9", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Updating config hash for swift_container_server, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Updating config hash for swift_object_replicator, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Updating config hash for neutron_l3_agent, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,048 DEBUG: 28337 -- Updating config hash for cinder_scheduler, config_volume=heat_api_cfn hash=367641b86bd126aff95ee0c55a51fc3b", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,049 DEBUG: 28337 
-- Got hashfile /var/lib/config-data/puppet-generated/nova.md5sum for config_volume /var/lib/config-data/puppet-generated/nova", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Updating config hash for nova_conductor, config_volume=heat_api_cfn hash=158b7a99ccb3969772c2da84f97aaa8a", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api_cfn.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api_cfn", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Updating config hash for heat_api_cfn, config_volume=heat_api_cfn hash=ce0cce9238398029e71fafd3aa3a5e08", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/sahara.md5sum for config_volume /var/lib/config-data/puppet-generated/sahara", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Updating config hash for sahara_api, config_volume=heat_api_cfn hash=746c7f35bc3f7042187519e709a2d662", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Updating config hash for sahara_engine, config_volume=heat_api_cfn hash=746c7f35bc3f7042187519e709a2d662", > "2018-08-20 10:28:42,049 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Updating config hash for neutron_ovs_agent, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 10:28:42,050 
DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/cinder.md5sum for config_volume /var/lib/config-data/puppet-generated/cinder", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Updating config hash for cinder_api_cron, config_volume=heat_api_cfn hash=367641b86bd126aff95ee0c55a51fc3b", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Updating config hash for swift_account_auditor, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Updating config hash for swift_container_replicator, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,050 DEBUG: 28337 -- Updating config hash for swift_object_updater, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Updating config hash for swift_object_expirer, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", 
> "2018-08-20 10:28:42,051 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/heat_api.md5sum for config_volume /var/lib/config-data/puppet-generated/heat_api", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Updating config hash for heat_api_cron, config_volume=heat_api_cfn hash=230f8870e1c2ddaa58ab84d221badc5f", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Updating config hash for swift_container_auditor, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/panko.md5sum for config_volume /var/lib/config-data/puppet-generated/panko", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Updating config hash for panko_api, config_volume=heat_api_cfn hash=7da0b4ed8224d06139116ef2dc86ad6d", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,051 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/aodh.md5sum for config_volume /var/lib/config-data/puppet-generated/aodh", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Updating config hash for aodh_listener, config_volume=heat_api_cfn hash=f311982a39e04b74f3edfe7925293276", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/neutron.md5sum for config_volume /var/lib/config-data/puppet-generated/neutron", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Updating config hash for neutron_api, config_volume=heat_api_cfn hash=0c84b9113df5acecca97d6f519c78f87", > "2018-08-20 
10:28:42,052 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/swift.md5sum for config_volume /var/lib/config-data/puppet-generated/swift", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Updating config hash for swift_account_server, config_volume=heat_api_cfn hash=720092ccd254db13c7fdae053af72b87", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Looking for hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Got hashfile /var/lib/config-data/puppet-generated/crond.md5sum for config_volume /var/lib/config-data/puppet-generated/crond", > "2018-08-20 10:28:42,052 DEBUG: 28337 -- Updating config hash for logrotate_crond, config_volume=heat_api_cfn hash=f698ba12bc53b0b597cb3a0b7e7f728f" > ] >} >2018-08-20 06:28:43,670 p=1013 u=mistral | TASK [Start containers for step 1] ********************************************* >2018-08-20 06:28:43,670 p=1013 u=mistral | Monday 20 August 2018 06:28:43 -0400 (0:00:01.341) 0:09:25.700 ********* >2018-08-20 06:28:44,264 p=1013 u=mistral | ok: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:28:44,278 p=1013 u=mistral | ok: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:29:12,290 p=1013 u=mistral | ok: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:29:12,318 p=1013 u=mistral | TASK [Debug output for task which failed: Start containers for step 1] ********* >2018-08-20 06:29:12,319 p=1013 u=mistral | Monday 20 August 
2018 06:29:12 -0400 (0:00:28.648) 0:09:54.349 ********* >2018-08-20 06:29:12,391 p=1013 u=mistral | ok: [controller-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [ > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-backup ... ", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-backup", > "378837c0e24a: Already exists", > "e17262bc2341: Already exists", > "b0b426385936: Already exists", > "bfd71860b3fc: Already exists", > "c086fc84b8c8: Already exists", > "0e36e709a73b: Pulling fs layer", > "0e36e709a73b: Verifying Checksum", > "0e36e709a73b: Download complete", > "0e36e709a73b: Pull complete", > "Digest: sha256:5db4fd8ddd7e184492ae660d0bb92d76e4bdb1c0e59ec5e6b297607ecec5f96e", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-backup:2018-08-17.2", > "", > "stderr: ", > "stdout: Trying to pull repository 192.168.24.1:8787/rhosp14/openstack-cinder-volume ... 
", > "2018-08-17.2: Pulling from 192.168.24.1:8787/rhosp14/openstack-cinder-volume", > "a1e979829068: Pulling fs layer", > "a1e979829068: Verifying Checksum", > "a1e979829068: Download complete", > "a1e979829068: Pull complete", > "Digest: sha256:3ec86820947b54e4fee57de992494a83228e65d456a1df67c8221379f22268b8", > "Status: Downloaded newer image for 192.168.24.1:8787/rhosp14/openstack-cinder-volume:2018-08-17.2", > "stdout: ", > "stdout: a110acfe5f9a0e2858819e4276d25563380b892219be855b2b3445b80970b62c", > "stdout: ba7153c7ac06d0715a880a477d85e32bb933080916b9707e537b83453dc97935", > "stdout: Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...", > "OK", > "Filling help tables...", > "Creating OpenGIS required SP-s...", > "To start mysqld at boot time you have to copy", > "support-files/mysql.server to the right place for your system", > "PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !", > "To do so, start the server, then issue the following commands:", > "'/usr/bin/mysqladmin' -u root password 'new-password'", > "'/usr/bin/mysqladmin' -u root -h controller-0 password 'new-password'", > "Alternatively you can run:", > "'/usr/bin/mysql_secure_installation'", > "which will also give you the option of removing the test", > "databases and anonymous user created by default. 
This is", > "strongly recommended for production servers.", > "See the MariaDB Knowledgebase at http://mariadb.com/kb or the", > "MySQL manual for more instructions.", > "You can start the MariaDB daemon with:", > "cd '/usr' ; /usr/bin/mysqld_safe --datadir='/var/lib/mysql'", > "You can test the MariaDB daemon with mysql-test-run.pl", > "cd '/usr/mysql-test' ; perl mysql-test-run.pl", > "Please report any problems at http://mariadb.org/jira", > "The latest information about MariaDB is available at http://mariadb.org/.", > "You can find additional information about the MySQL part at:", > "http://dev.mysql.com", > "Consider joining MariaDB's strong and vibrant community:", > "https://mariadb.org/get-involved/", > "180820 10:29:03 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180820 10:29:03 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "spawn mysql_secure_installation", > "NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB", > " SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!", > "In order to log into MariaDB to secure it, we'll need the current", > "password for the root user. If you've just installed MariaDB, and", > "you haven't set the root password yet, the password will be blank,", > "so you should just press enter here.", > "Enter current password for root (enter for none): ", > "OK, successfully used password, moving on...", > "Setting the root password ensures that nobody can log into the MariaDB", > "root user without the proper authorisation.", > "Set root password? [Y/n] y", > "New password: ", > "Re-enter new password: ", > "Password updated successfully!", > "Reloading privilege tables..", > " ... Success!", > "By default, a MariaDB installation has an anonymous user, allowing anyone", > "to log into MariaDB without having to have a user account created for", > "them. This is intended only for testing, and to make the installation", > "go a bit smoother. 
You should remove them before moving into a", > "production environment.", > "Remove anonymous users? [Y/n] y", > "Normally, root should only be allowed to connect from 'localhost'. This", > "ensures that someone cannot guess at the root password from the network.", > "Disallow root login remotely? [Y/n] n", > " ... skipping.", > "By default, MariaDB comes with a database named 'test' that anyone can", > "access. This is also intended only for testing, and should be removed", > "before moving into a production environment.", > "Remove test database and access to it? [Y/n] y", > " - Dropping test database...", > " - Removing privileges on test database...", > "Reloading the privilege tables will ensure that all changes made so far", > "will take effect immediately.", > "Reload privilege tables now? [Y/n] y", > "Cleaning up...", > "All done! If you've completed all of the above steps, your MariaDB", > "installation should now be secure.", > "Thanks for using MariaDB!", > "180820 10:29:06 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "180820 10:29:07 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.", > "180820 10:29:07 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql", > "mysqld is alive", > "180820 10:29:10 mysqld_safe mysqld from pid file /var/lib/mysql/mariadb.pid ended", > "stderr: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", > "INFO:__main__:Validating config file", > "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", > "INFO:__main__:Copying service configuration files", > "INFO:__main__:Copying /dev/null to /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Setting permission for /etc/libqb/force-filesystem-sockets", > "INFO:__main__:Deleting /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/my.cnf.d/galera.cnf to /etc/my.cnf.d/galera.cnf", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/etc/sysconfig/clustercheck to 
/etc/sysconfig/clustercheck", > "INFO:__main__:Copying /var/lib/kolla/config_files/src/root/.my.cnf to /root/.my.cnf", > "INFO:__main__:Writing out command to execute", > "2018-08-20 10:28:50 140159377500352 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-08-20 10:28:50 140159377500352 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 42 ...", > "2018-08-20 10:28:54 139934433298624 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-08-20 10:28:54 139934433298624 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 71 ...", > "2018-08-20 10:28:59 140332058388672 [Warning] option 'open_files_limit': unsigned value 18446744073709551615 adjusted to 4294967295", > "2018-08-20 10:28:59 140332058388672 [Note] /usr/libexec/mysqld (mysqld 10.1.20-MariaDB) starting as process 101 ...", > "/usr/bin/mysqld_safe: line 755: ulimit: -1: invalid option", > "ulimit: usage: ulimit [-SHacdefilmnpqrstuvx] [limit]" > ] >} >2018-08-20 06:29:12,416 p=1013 u=mistral | ok: [compute-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-08-20 06:29:12,433 p=1013 u=mistral | ok: [ceph-0] => { > "failed_when_result": false, > "outputs.stdout_lines | default([]) | union(outputs.stderr_lines | default([]))": [] >} >2018-08-20 06:29:12,456 p=1013 u=mistral | TASK [Check if /var/lib/docker-puppet/docker-puppet-tasks1.json exists] ******** >2018-08-20 06:29:12,456 p=1013 u=mistral | Monday 20 August 2018 06:29:12 -0400 (0:00:00.137) 0:09:54.486 ********* >2018-08-20 06:29:12,673 p=1013 u=mistral | ok: [controller-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:29:12,688 p=1013 u=mistral | ok: [compute-0] => {"changed": false, "stat": {"exists": false}} >2018-08-20 06:29:12,727 p=1013 u=mistral | ok: [ceph-0] => {"changed": false, "stat": {"exists": 
false}} >2018-08-20 06:29:12,755 p=1013 u=mistral | TASK [Run docker-puppet tasks (bootstrap tasks) for step 1] ******************** >2018-08-20 06:29:12,755 p=1013 u=mistral | Monday 20 August 2018 06:29:12 -0400 (0:00:00.298) 0:09:54.785 ********* >2018-08-20 06:29:12,787 p=1013 u=mistral | skipping: [controller-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:29:12,815 p=1013 u=mistral | skipping: [compute-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:29:12,829 p=1013 u=mistral | skipping: [ceph-0] => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} >2018-08-20 06:29:12,855 p=1013 u=mistral | TASK [Debug output for task which failed: Run docker-puppet tasks (bootstrap tasks) for step 1] *** >2018-08-20 06:29:12,855 p=1013 u=mistral | Monday 20 August 2018 06:29:12 -0400 (0:00:00.100) 0:09:54.885 ********* >2018-08-20 06:29:12,887 p=1013 u=mistral | skipping: [controller-0] => {} >2018-08-20 06:29:12,924 p=1013 u=mistral | skipping: [compute-0] => {} >2018-08-20 06:29:12,944 p=1013 u=mistral | skipping: [ceph-0] => {} >2018-08-20 06:29:12,951 p=1013 u=mistral | PLAY [External deployment step 2] ********************************************** >2018-08-20 06:29:12,971 p=1013 u=mistral | TASK [set blacklisted_hostnames] *********************************************** >2018-08-20 06:29:12,971 p=1013 u=mistral | Monday 20 August 2018 06:29:12 -0400 (0:00:00.115) 0:09:55.001 ********* >2018-08-20 06:29:12,993 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,010 p=1013 u=mistral | TASK [create ceph-ansible temp dirs] ******************************************* >2018-08-20 06:29:13,010 p=1013 u=mistral | Monday 20 
August 2018 06:29:13 -0400 (0:00:00.039) 0:09:55.040 ********* >2018-08-20 06:29:13,041 p=1013 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/group_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/group_vars", "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,050 p=1013 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/host_vars) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/host_vars", "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,051 p=1013 u=mistral | skipping: [undercloud] => (item=/var/lib/mistral/overcloud/ceph-ansible/fetch_dir) => {"changed": false, "item": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,066 p=1013 u=mistral | TASK [generate inventory] ****************************************************** >2018-08-20 06:29:13,067 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.056) 0:09:55.097 ********* >2018-08-20 06:29:13,091 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,108 p=1013 u=mistral | TASK [set ceph-ansible group vars all] ***************************************** >2018-08-20 06:29:13,108 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.041) 0:09:55.138 ********* >2018-08-20 06:29:13,134 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,155 p=1013 u=mistral | TASK [generate ceph-ansible group vars all] ************************************ >2018-08-20 06:29:13,156 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.047) 0:09:55.186 ********* >2018-08-20 06:29:13,190 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,207 
p=1013 u=mistral | TASK [set ceph-ansible extra vars] ********************************************* >2018-08-20 06:29:13,207 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.051) 0:09:55.237 ********* >2018-08-20 06:29:13,233 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,257 p=1013 u=mistral | TASK [generate ceph-ansible extra vars] **************************************** >2018-08-20 06:29:13,258 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.050) 0:09:55.288 ********* >2018-08-20 06:29:13,287 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,300 p=1013 u=mistral | TASK [generate nodes-uuid data file] ******************************************* >2018-08-20 06:29:13,300 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.042) 0:09:55.330 ********* >2018-08-20 06:29:13,321 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,335 p=1013 u=mistral | TASK [generate nodes-uuid playbook] ******************************************** >2018-08-20 06:29:13,335 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.034) 0:09:55.365 ********* >2018-08-20 06:29:13,355 p=1013 u=mistral | skipping: [undercloud] => {"changed": false, "skip_reason": "Conditional result was False"} >2018-08-20 06:29:13,372 p=1013 u=mistral | TASK [run nodes-uuid] ********************************************************** >2018-08-20 06:29:13,372 p=1013 u=mistral | Monday 20 August 2018 06:29:13 -0400 (0:00:00.037) 0:09:55.402 ********* >2018-08-20 06:29:15,839 p=1013 u=mistral | changed: [undercloud] => {"changed": true, "cmd": "ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_command.log\" ANSIBLE_CONFIG=\"/var/lib/mistral/overcloud/ansible.cfg\" 
ANSIBLE_REMOTE_TEMP=/tmp/nodes_uuid_tmp ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml /var/lib/mistral/overcloud/ceph-ansible/nodes_uuid_playbook.yml", "delta": "0:00:02.257579", "end": "2018-08-20 06:29:15.820447", "rc": 0, "start": "2018-08-20 06:29:13.562868", "stderr": "", "stderr_lines": [], "stdout": "\nPLAY [all] *********************************************************************\n\nTASK [set nodes data] **********************************************************\nMonday 20 August 2018 06:29:14 -0400 (0:00:00.072) 0:00:00.072 ********* \nok: [compute-0]\nok: [ceph-0]\nok: [controller-0]\n\nTASK [register machine id] *****************************************************\nMonday 20 August 2018 06:29:14 -0400 (0:00:00.081) 0:00:00.154 ********* \nchanged: [ceph-0]\nchanged: [controller-0]\nchanged: [compute-0]\n\nTASK [generate host vars from nodes data] **************************************\nMonday 20 August 2018 06:29:15 -0400 (0:00:00.313) 0:00:00.468 ********* \nchanged: [controller-0 -> localhost]\nchanged: [ceph-0 -> localhost]\nchanged: [compute-0 -> localhost]\n\nPLAY RECAP *********************************************************************\nceph-0 : ok=3 changed=2 unreachable=0 failed=0 \ncompute-0 : ok=3 changed=2 unreachable=0 failed=0 \ncontroller-0 : ok=3 changed=2 unreachable=0 failed=0 \n\nMonday 20 August 2018 06:29:15 -0400 (0:00:00.581) 0:00:01.049 ********* \n=============================================================================== ", "stdout_lines": ["", "PLAY [all] *********************************************************************", "", "TASK [set nodes data] **********************************************************", "Monday 20 August 2018 06:29:14 -0400 (0:00:00.072) 0:00:00.072 ********* ", "ok: [compute-0]", "ok: [ceph-0]", "ok: [controller-0]", "", "TASK [register machine id] *****************************************************", "Monday 
20 August 2018 06:29:14 -0400 (0:00:00.081) 0:00:00.154 ********* ", "changed: [ceph-0]", "changed: [controller-0]", "changed: [compute-0]", "", "TASK [generate host vars from nodes data] **************************************", "Monday 20 August 2018 06:29:15 -0400 (0:00:00.313) 0:00:00.468 ********* ", "changed: [controller-0 -> localhost]", "changed: [ceph-0 -> localhost]", "changed: [compute-0 -> localhost]", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=3 changed=2 unreachable=0 failed=0 ", "compute-0 : ok=3 changed=2 unreachable=0 failed=0 ", "controller-0 : ok=3 changed=2 unreachable=0 failed=0 ", "", "Monday 20 August 2018 06:29:15 -0400 (0:00:00.581) 0:00:01.049 ********* ", "=============================================================================== "]} >2018-08-20 06:29:15,854 p=1013 u=mistral | TASK [set ceph-ansible verbosity] ********************************************** >2018-08-20 06:29:15,854 p=1013 u=mistral | Monday 20 August 2018 06:29:15 -0400 (0:00:02.481) 0:09:57.884 ********* >2018-08-20 06:29:15,886 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_playbook_verbosity": 2}, "changed": false} >2018-08-20 06:29:15,900 p=1013 u=mistral | TASK [set ceph-ansible command] ************************************************ >2018-08-20 06:29:15,900 p=1013 u=mistral | Monday 20 August 2018 06:29:15 -0400 (0:00:00.046) 0:09:57.930 ********* >2018-08-20 06:29:15,939 p=1013 u=mistral | ok: [undercloud] => {"ansible_facts": {"ceph_ansible_command": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 
ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml"}, "changed": false} >2018-08-20 06:29:15,952 p=1013 u=mistral | TASK [run ceph-ansible] ******************************************************** >2018-08-20 06:29:15,952 p=1013 u=mistral | Monday 20 August 2018 06:29:15 -0400 (0:00:00.051) 0:09:57.982 ********* >2018-08-20 06:32:21,389 p=1013 u=mistral | failed: [undercloud] (item=/usr/share/ceph-ansible/site-docker.yml.sample) => {"changed": true, "cmd": "ANSIBLE_ACTION_PLUGINS=/usr/share/ceph-ansible/plugins/actions/ ANSIBLE_CALLBACK_PLUGINS=/usr/share/ceph-ansible/plugins/callback/ ANSIBLE_ROLES_PATH=/usr/share/ceph-ansible/roles/ ANSIBLE_LOG_PATH=\"/var/lib/mistral/overcloud/ceph-ansible/ceph_ansible_command.log\" ANSIBLE_LIBRARY=/usr/share/ceph-ansible/library/ ANSIBLE_CONFIG=/usr/share/ceph-ansible/ansible.cfg ANSIBLE_REMOTE_TEMP=/tmp/ceph_ansible_tmp ANSIBLE_FORKS=25 ansible-playbook --private-key /var/lib/mistral/overcloud/ssh_private_key -vv --skip-tags package-install,with_pkg -i /var/lib/mistral/overcloud/ceph-ansible/inventory.yml --extra-vars @/var/lib/mistral/overcloud/ceph-ansible/extra_vars.yml /usr/share/ceph-ansible/site-docker.yml.sample", "delta": "0:03:05.046839", "end": "2018-08-20 06:32:21.139509", "item": "/usr/share/ceph-ansible/site-docker.yml.sample", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 06:29:16.092670", "stderr": "[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. 
Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not match supplied host pattern, ignoring: agents\n [WARNING]: Could not match supplied host pattern, ignoring: mdss\n [WARNING]: Could not match supplied host pattern, ignoring: rgws\n [WARNING]: Could not match supplied host pattern, ignoring: nfss\n [WARNING]: Could not match supplied host pattern, ignoring: restapis\n [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors\n [WARNING]: Could not match supplied host pattern, ignoring: iscsigws\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. 
Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ inventory_hostname ==\ngroups[mon_group_name][0] }}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n [WARNING]: when statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|search` instead use `result is search`. This feature will be removed in\n version 2.9. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using \n`result|version_compare` instead use `result is version_compare`. This feature \nwill be removed in version 2.9. Deprecation warnings can be disabled by setting\n deprecation_warnings=False in ansible.cfg.", "stderr_lines": ["[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use ", "'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. ", "This feature will be removed in a future release. Deprecation warnings can be ", "disabled by setting deprecation_warnings=False in ansible.cfg.", " [WARNING]: Could not match supplied host pattern, ignoring: agents", " [WARNING]: Could not match supplied host pattern, ignoring: mdss", " [WARNING]: Could not match supplied host pattern, ignoring: rgws", " [WARNING]: Could not match supplied host pattern, ignoring: nfss", " [WARNING]: Could not match supplied host pattern, ignoring: restapis", " [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors", " [WARNING]: Could not match supplied host pattern, ignoring: iscsigws", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ inventory_hostname ==", "groups[mon_group_name][0] }}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", " [WARNING]: when statements should not include jinja2 templating delimiters", "such as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0", "}}", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. 
Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. 
Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|search` instead use `result is search`. This feature will be removed in", " version 2.9. Deprecation warnings can be disabled by setting ", "deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. 
This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg.", "[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using ", "`result|version_compare` instead use `result is version_compare`. This feature ", "will be removed in version 2.9. Deprecation warnings can be disabled by setting", " deprecation_warnings=False in ansible.cfg."], "stdout": "ansible-playbook 2.5.7\n config file = /usr/share/ceph-ansible/ansible.cfg\n configured module search path = [u'/usr/share/ceph-ansible/library']\n ansible python module location = /usr/lib/python2.7/site-packages/ansible\n executable location = /usr/bin/ansible-playbook\n python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]\nUsing /usr/share/ceph-ansible/ansible.cfg as config file\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml\nstatically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml\nstatically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml\n\nPLAYBOOK: site-docker.yml.sample ***********************************************\n12 plays in /usr/share/ceph-ansible/site-docker.yml.sample\n\nPLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***\n\nTASK [gather facts] ************************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:24\nMonday 20 August 2018 06:29:19 -0400 (0:00:00.200) 0:00:00.200 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [gather and delegate facts] ***********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:29\nMonday 20 August 2018 06:29:19 -0400 (0:00:00.083) 0:00:00.283 ********* \nok: [controller-0 -> 192.168.24.13] => (item=compute-0)\nok: [controller-0 -> 192.168.24.12] => (item=controller-0)\nok: [controller-0 -> 192.168.24.16] => (item=ceph-0)\n\nTASK [check if it is atomic host] **********************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:38\nMonday 20 August 2018 06:29:31 -0400 (0:00:12.110) 0:00:12.394 ********* 
\nok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\nok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [set_fact is_atomic] ******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:45\nMonday 20 August 2018 06:29:32 -0400 (0:00:00.521) 0:00:12.915 ********* \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nTASK [pull rhceph image] *******************************************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:66\nMonday 20 August 2018 06:29:32 -0400 (0:00:00.175) 0:00:13.090 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nskipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [set ceph monitor install 'In Progress'] **********************************\ntask path: /usr/share/ceph-ansible/site-docker.yml.sample:76\nMonday 20 August 2018 06:29:32 -0400 (0:00:00.115) 0:00:13.205 ********* \nok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180820062932Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}\nMETA: ran handlers\nMETA: ran handlers\n\nPLAY [mons] ********************************************************************\nMETA: ran handlers\n\nTASK [ceph-defaults : check for a mon container] 
*******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2\nMonday 20 August 2018 06:29:32 -0400 (0:00:00.169) 0:00:13.375 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.028304\", \"end\": \"2018-08-20 10:29:33.155728\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:33.127424\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for an osd container] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.560) 0:00:13.935 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mds container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.049) 0:00:13.984 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a rgw container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.047) 0:00:14.032 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a mgr container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.077 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], 
\"delta\": \"0:00:00.023435\", \"end\": \"2018-08-20 10:29:33.631166\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:33.607731\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check for a rbd mirror container] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.257) 0:00:14.334 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a nfs container] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.048) 0:00:14.383 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mon socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.047) 0:00:14.430 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mon socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.475 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.520 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-defaults : check for a ceph osd socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.046) 0:00:14.566 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph osd socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40\nMonday 20 August 2018 06:29:33 -0400 (0:00:00.046) 0:00:14.612 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.067) 0:00:14.680 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mds socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.047) 0:00:14.728 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mds socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.045) 0:00:14.774 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79\nMonday 
20 August 2018 06:29:34 -0400 (0:00:00.050) 0:00:14.824 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rgw socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.043) 0:00:14.867 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.057) 0:00:14.925 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.058) 0:00:14.983 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph mgr socket] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.053) 0:00:15.037 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.060) 0:00:15.098 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph mgr 
socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.057) 0:00:15.155 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph rbd mirror socket] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.053) 0:00:15.208 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.070) 0:00:15.279 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.058) 0:00:15.338 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.050) 0:00:15.388 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.047) 
0:00:15.435 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.046) 0:00:15.481 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : check if it is atomic host] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2\nMonday 20 August 2018 06:29:34 -0400 (0:00:00.046) 0:00:15.528 ********* \nok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact is_atomic] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.211) 0:00:15.740 ********* \nok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.073) 0:00:15.813 ********* \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.079) 0:00:15.893 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact docker_exec_cmd] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23\nMonday 20 August 2018 06:29:35 -0400 
(0:00:00.071) 0:00:15.965 ********* \nok: [controller-0 -> 192.168.24.12] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : is ceph running already?] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.136) 0:00:16.101 ********* \nok: [controller-0 -> 192.168.24.12] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.026050\", \"end\": \"2018-08-20 10:29:35.659042\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-08-20 10:29:35.632992\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.265) 0:00:16.367 ********* \nok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}\n\nTASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.198) 0:00:16.566 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : create a local fetch directory if it does not exist] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64\nMonday 20 August 2018 06:29:35 -0400 (0:00:00.058) 0:00:16.625 ********* \nok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, 
\"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}\n\nTASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.387) 0:00:17.012 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.052) 0:00:17.065 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}\n\nTASK [ceph-defaults : generate cluster fsid] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.076) 0:00:17.142 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.048) 0:00:17.190 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : read cluster fsid if it already exists] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.052) 0:00:17.242 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact fsid] *******************************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.044) 0:00:17.287 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.047) 0:00:17.335 ********* \nok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.079) 0:00:17.414 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.043) 0:00:17.457 ********* \nok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.081) 0:00:17.539 ********* \nok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149\nMonday 20 August 2018 06:29:36 -0400 (0:00:00.075) 0:00:17.615 ********* \nok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}\n\nTASK [ceph-defaults : resolve device link(s)] 
**********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.078) 0:00:17.693 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.050) 0:00:17.744 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.049) 0:00:17.793 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.045) 0:00:17.839 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.045) 0:00:17.885 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.044) 0:00:17.929 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults 
: set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.046) 0:00:17.976 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.047) 0:00:18.024 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : get current cluster status (if already running)] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.166) 0:00:18.190 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.122) 0:00:18.312 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.044) 0:00:18.357 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.052) 0:00:18.410 ********* \nok: [controller-0] => {\"ansible_facts\": {\"rgw_hostname\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-defaults 
: set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.070) 0:00:18.480 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nMonday 20 August 2018 06:29:37 -0400 (0:00:00.069) 0:00:18.550 ********* \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", 
\"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", 
\"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nMonday 20 August 2018 06:29:39 -0400 (0:00:02.006) 0:00:20.556 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nMonday 20 August 2018 06:29:39 -0400 (0:00:00.047) 0:00:20.604 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.056) 0:00:20.660 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK 
[ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.044) 0:00:20.705 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.045) 0:00:20.750 ********* \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.382) 0:00:21.133 ********* \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.088) 0:00:21.221 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.041) 
0:00:21.263 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.020137\", \"end\": \"2018-08-20 10:29:40.795882\", \"rc\": 0, \"start\": \"2018-08-20 10:29:40.775745\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.233) 0:00:21.497 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nMonday 20 August 2018 06:29:40 -0400 (0:00:00.075) 0:00:21.572 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.022275\", \"end\": \"2018-08-20 10:29:41.105471\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:41.083196\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nMonday 20 August 2018 06:29:41 -0400 (0:00:00.233) 0:00:21.806 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK 
[ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nMonday 20 August 2018 06:29:41 -0400 (0:00:00.098) 0:00:21.904 ********* \nok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nMonday 20 August 2018 06:29:41 -0400 (0:00:00.133) 0:00:22.037 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nMonday 20 August 2018 06:29:41 -0400 (0:00:00.084) 0:00:22.122 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nMonday 20 August 2018 06:29:41 -0400 (0:00:00.108) 0:00:22.231 ********* \nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> 
localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}\nok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nMonday 20 August 2018 06:29:42 -0400 (0:00:01.170) 0:00:23.401 ********* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 
'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": 
{\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": 
false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result 
was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': 
u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.269) 0:00:23.671 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.042) 0:00:23.713 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.040) 0:00:23.754 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:23.801 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.055) 0:00:23.857 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.048) 0:00:23.905 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:23.951 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.052) 0:00:24.004 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.058) 0:00:24.063 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.058) 0:00:24.121 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.047) 0:00:24.169 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.043) 0:00:24.213 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.257 
********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.051) 0:00:24.308 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.056) 0:00:24.365 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:24.412 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.048) 0:00:24.461 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.505 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.549 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 20 August 2018 06:29:43 -0400 (0:00:00.054) 0:00:24.603 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.059) 0:00:24.663 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.047) 0:00:24.711 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.047) 0:00:24.758 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.050) 0:00:24.809 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.048) 0:00:24.857 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.058) 0:00:24.916 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.049) 0:00:24.965 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.049) 0:00:25.015 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.053) 0:00:25.068 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 20 August 2018 06:29:44 -0400 (0:00:00.052) 0:00:25.121 
********* \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:13.452881\", \"end\": \"2018-08-20 10:29:58.205484\", \"rc\": 0, \"start\": \"2018-08-20 10:29:44.752603\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nd02c3bd49e78: Pulling fs layer\\n475b0168c252: Pulling fs layer\\n9cc28bc5e4f9: Pulling fs layer\\n475b0168c252: Download complete\\nd02c3bd49e78: Download complete\\n9cc28bc5e4f9: Verifying Checksum\\n9cc28bc5e4f9: Download complete\\nd02c3bd49e78: Pull complete\\n475b0168c252: Pull complete\\n9cc28bc5e4f9: Pull complete\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"d02c3bd49e78: Pulling fs layer\", \"475b0168c252: Pulling fs layer\", \"9cc28bc5e4f9: Pulling fs layer\", \"475b0168c252: Download complete\", \"d02c3bd49e78: Download complete\", \"9cc28bc5e4f9: Verifying Checksum\", \"9cc28bc5e4f9: Download complete\", \"d02c3bd49e78: Pull complete\", \"475b0168c252: Pull complete\", \"9cc28bc5e4f9: Pull complete\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 20 August 2018 06:29:58 -0400 (0:00:13.795) 0:00:38.917 ********* \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.024836\", \"end\": \"2018-08-20 
10:29:58.583464\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:58.558628\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n 
\\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": 
\\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 20 August 2018 06:29:58 -0400 (0:00:00.381) 0:00:39.299 ********* \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200
Monday 20 August 2018 06:29:58 -0400 (0:00:00.209) 0:00:39.509 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211
Monday 20 August 2018 06:29:58 -0400 (0:00:00.123) 0:00:39.633 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222
Monday 20 August 2018 06:29:59 -0400 (0:00:00.050) 0:00:39.684 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233
Monday 20 August 2018 06:29:59 -0400 (0:00:00.048) 0:00:39.732 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244
Monday 20 August 2018 06:29:59 -0400 (0:00:00.049) 0:00:39.782 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255
Monday 20 August 2018 06:29:59 -0400 (0:00:00.050) 0:00:39.833 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266
Monday 20 August 2018 06:29:59 -0400 (0:00:00.044) 0:00:39.878 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : export local ceph dev image] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277
Monday 20 August 2018 06:29:59 -0400 (0:00:00.051) 0:00:39.930 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : copy ceph dev image file] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285
Monday 20 August 2018 06:29:59 -0400 (0:00:00.043) 0:00:39.974 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : load ceph dev image] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292
Monday 20 August 2018 06:29:59 -0400 (0:00:00.043) 0:00:40.017 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : remove tmp ceph dev image file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297
Monday 20 August 2018 06:29:59 -0400 (0:00:00.045) 0:00:40.062 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : get ceph version] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84
Monday 20 August 2018 06:29:59 -0400 (0:00:00.052) 0:00:40.114 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "run", "--rm", "--entrypoint", "/usr/bin/ceph", "192.168.24.1:8787/rhceph:3-11", "--version"], "delta": "0:00:00.465469", "end": "2018-08-20 10:30:00.105336", "rc": 0, "start": "2018-08-20 10:29:59.639867", "stderr": "", "stderr_lines": [], "stdout": "ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)", "stdout_lines": ["ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)"]}

TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90
Monday 20 August 2018 06:30:00 -0400 (0:00:00.699) 0:00:40.814 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_version": "12.2.4-30.el7cp"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release jewel] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2
Monday 20 August 2018 06:30:00 -0400 (0:00:00.076) 0:00:40.890 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8
Monday 20 August 2018 06:30:00 -0400 (0:00:00.050) 0:00:40.940 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release luminous] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14
Monday 20 August 2018 06:30:00 -0400 (0:00:00.047) 0:00:40.988 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "luminous"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release mimic] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20
Monday 20 August 2018 06:30:00 -0400 (0:00:00.083) 0:00:41.072 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26
Monday 20 August 2018 06:30:00 -0400 (0:00:00.050) 0:00:41.122 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : create bootstrap directories] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2
Monday 20 August 2018 06:30:00 -0400 (0:00:00.047) 0:00:41.170 ********* 
changed: [controller-0] => (item=/etc/ceph) => {"changed": true, "gid": 64045, "group": "64045", "item": "/etc/ceph", "mode": "0755", "owner": "64045", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}

TASK [ceph-config : create ceph conf directory] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4
Monday 20 August 2018 06:30:01 -0400 (0:00:00.879) 0:00:42.050 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12
Monday 20 August 2018 06:30:01 -0400 (0:00:00.055) 0:00:42.105 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : create a local fetch directory if it does not exist] *******
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38
Monday 20 August 2018 06:30:01 -0400 (0:00:00.054) 0:00:42.160 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 6, "state": "directory", "uid": 42430}

TASK [ceph-config : generate cluster uuid] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54
Monday 20 August 2018 06:30:01 -0400 (0:00:00.252) 0:00:42.413 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : read cluster uuid if it already exists] ********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64
Monday 20 August 2018 06:30:01 -0400 (0:00:00.053) 0:00:42.467 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Monday 20 August 2018 06:30:01 -0400 (0:00:00.048) 0:00:42.515 ********* 
changed: [controller-0] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Monday 20 August 2018 06:30:02 -0400 (0:00:00.234) 0:00:42.749 ********* 
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy osd restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0
changed: [controller-0] => {"changed": true, "checksum": "ad274129acdf99bf79681112519249b5cd433cfc", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "md5sum": "d12c4a40219f2d53aebea240077fc57d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1103, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761002.14-187001174316155/source", "state": "file", "uid": 0}

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Monday 20 August 2018 06:30:04 -0400 (0:00:02.373) 0:00:45.122 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact docker_exec_cmd] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2
Monday 20 August 2018 06:30:04 -0400 (0:00:00.064) 0:00:45.187 ********* 
ok: [controller-0] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2
Monday 20 August 2018 06:30:04 -0400 (0:00:00.200) 0:00:45.388 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : generate monitor initial keyring] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2
Monday 20 August 2018 06:30:04 -0400 (0:00:00.066) 0:00:45.454 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : read monitor initial keyring if it already exists] ************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11
Monday 20 August 2018 06:30:04 -0400 (0:00:00.066) 0:00:45.521 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create monitor initial keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22
Monday 20 August 2018 06:30:04 -0400 (0:00:00.051) 0:00:45.572 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set initial monitor key permissions] **************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34
Monday 20 August 2018 06:30:04 -0400 (0:00:00.051) 0:00:45.624 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create (and fix ownership of) monitor directory] **************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42
Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:45.675 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51
Monday 20 August 2018 06:30:05 -0400 (0:00:00.061) 0:00:45.737 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63
Monday 20 August 2018 06:30:05 -0400 (0:00:00.066) 0:00:45.803 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create custom admin keyring] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74
Monday 20 August 2018 06:30:05 -0400 (0:00:00.261) 0:00:46.065 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set ownership of admin keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88
Monday 20 August 2018 06:30:05 -0400 (0:00:00.060) 0:00:46.125 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : import admin keyring into mon keyring] ************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99
Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:46.176 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106
Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:46.227 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113
Monday 20 August 2018 06:30:05 -0400 (0:00:00.044) 0:00:46.272 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : ensure systemd service override directory exists] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2
Monday 20 August 2018 06:30:05 -0400 (0:00:00.052) 0:00:46.325 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10
Monday 20 August 2018 06:30:05 -0400 (0:00:00.049) 0:00:46.375 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : start the monitor service] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20
Monday 20 August 2018 06:30:05 -0400 (0:00:00.056) 0:00:46.431 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : enable the ceph-mon.target service] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29
Monday 20 August 2018 06:30:05 -0400 (0:00:00.061) 0:00:46.493 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : include ceph_keys.yml] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19
Monday 20 August 2018 06:30:05 -0400 (0:00:00.063) 0:00:46.556 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : collect all the pools] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2
Monday 20 August 2018 06:30:05 -0400 (0:00:00.049) 0:00:46.605 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : secure the cluster] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7
Monday 20 August 2018 06:30:05 -0400 (0:00:00.048) 0:00:46.654 ********* 

TASK [ceph-mon : set_fact ceph_config_keys] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2
Monday 20 August 2018 06:30:06 -0400 (0:00:00.064) 0:00:46.718 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : register rbd bootstrap key] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11
Monday 20 August 2018 06:30:06 -0400 (0:00:00.086) 0:00:46.805 ********* 
ok: [controller-0] => {"ansible_facts": {"bootstrap_rbd_keyring": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17
Monday 20 August 2018 06:30:06 -0400 (0:00:00.083) 0:00:46.888 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_config_keys": ["/etc/ceph/ceph.client.admin.keyring", "/etc/ceph/ceph.mon.keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring", "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "/var/lib/ceph/bootstrap-mds/ceph.keyring", "/var/lib/ceph/bootstrap-rbd/ceph.keyring"]}, "changed": false}

TASK [ceph-mon : stat for ceph config and keys] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22
Monday 20 August 2018 06:30:06 -0400 (0:00:00.091) 0:00:46.980 ********* 
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {"changed": false, "failed_when_result": false, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}
ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {"changed": false, "failed_when_result": false, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}

TASK [ceph-mon : try to copy ceph keys] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33
Monday 20 August 2018 06:30:07 -0400 (0:00:00.983) 0:00:47.963 ********* 
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {"changed": false, "item": ["/etc/ceph/ceph.client.admin.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.client.admin.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring"}}, "item": "/etc/ceph/ceph.client.admin.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {"changed": false, "item": ["/etc/ceph/ceph.mon.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/etc/ceph/ceph.mon.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring"}}, "item": "/etc/ceph/ceph.mon.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-osd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rgw/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-mds/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {"changed": false, "item": ["/var/lib/ceph/bootstrap-rbd/ceph.keyring", {"_ansible_delegated_vars": {"ansible_delegated_host": "localhost", "ansible_host": "localhost"}, "_ansible_ignore_errors": null, "_ansible_item_label": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "failed": false, "failed_when_result": false, "invocation": {"module_args": {"checksum_algorithm": "sha1", "follow": false, "get_attributes": true, "get_checksum": true, "get_md5": null, "get_mime": true, "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring"}}, "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring", "stat": {"exists": false}}], "skip_reason": "Conditional result was False"}

TASK [ceph-mon : populate kv_store with default ceph.conf] *********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2
Monday 20 August 2018 06:30:07 -0400 (0:00:00.146) 0:00:48.110 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18
Monday 20 August 2018 06:30:07 -0400 (0:00:00.052) 0:00:48.163 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : delete populate-kv-store docker] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36
Monday 20 August 2018 06:30:07 -0400 (0:00:00.048) 0:00:48.211 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : generate systemd unit file] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43
Monday 20 August 2018 06:30:07 -0400 (0:00:00.047) 0:00:48.259 ********* 
changed: [controller-0] => {"changed": true, "checksum": "1fd7e13e28ace96222549265cb506432639d6b8b", "dest": "/etc/systemd/system/ceph-mon@.service", "gid": 0, "group": "root", "md5sum": "2d20afce9a3de8ef54fb3f294f9f63d7", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:systemd_unit_file_t:s0", "size": 887, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761007.64-194225143353676/source", "state": "file", "uid": 0}

TASK [ceph-mon : systemd start mon container] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54
Monday 20 August 2018 06:30:08 -0400 (0:00:00.865) 0:00:49.125 ********* 
changed: [controller-0] => {"changed": true, "enabled": true, "name": "ceph-mon@controller-0", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "systemd-journald.socket basic.target system-ceph\\x5cx2dmon.slice docker.service", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Ceph Monitor", "DevicePolicy": "auto", "EnvironmentFile": "/etc/environment (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.14 -e CLUSTER=ceph -e FSID=00d03b50-a460-11e8-8cf1-525400721501 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-11 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStop": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": 
\"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127799\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127799\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": 
\"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-mon : configure ceph profile.d aliases] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2\nMonday 20 August 2018 06:30:09 -0400 (0:00:00.716) 0:00:49.841 ********* \nchanged: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761009.23-195696500984284/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-mon : wait for monitor socket to exist] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12\nMonday 20 August 2018 06:30:09 -0400 (0:00:00.536) 0:00:50.378 ********* \nFAILED - RETRYING: 
wait for monitor socket to exist (5 retries left).\nchanged: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.078752\", \"end\": \"2018-08-20 10:30:25.236221\", \"rc\": 0, \"start\": \"2018-08-20 10:30:25.157469\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 326382 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-08-20 10:30:10.124988857 +0000\\nModify: 2018-08-20 10:30:10.124988857 +0000\\nChange: 2018-08-20 10:30:10.124988857 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 326382 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-08-20 10:30:10.124988857 +0000\", \"Modify: 2018-08-20 10:30:10.124988857 +0000\", \"Change: 2018-08-20 10:30:10.124988857 +0000\", \" Birth: -\"]}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19\nMonday 20 August 2018 06:30:25 -0400 (0:00:15.562) 0:01:05.940 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29\nMonday 20 August 2018 06:30:25 -0400 (0:00:00.097) 0:01:06.038 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon 
: ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39\nMonday 20 August 2018 06:30:25 -0400 (0:00:00.088) 0:01:06.126 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.14\"], \"delta\": \"0:00:00.175474\", \"end\": \"2018-08-20 10:30:25.913019\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:25.737545\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49\nMonday 20 August 2018 06:30:25 -0400 (0:00:00.489) 0:01:06.615 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59\nMonday 20 August 2018 06:30:26 -0400 (0:00:00.053) 0:01:06.669 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69\nMonday 20 August 2018 06:30:26 -0400 (0:00:00.054) 0:01:06.723 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : push ceph files to the ansible server] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2\nMonday 20 August 2018 06:30:26 -0400 (0:00:00.051) 0:01:06.775 ********* \nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": true, \"checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": 
{\"exists\": false}}], \"md5sum\": \"f4b124585db38fc16abb99f1a1324648\", \"remote_checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": true, \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"7ef3261179ff3f34a66f8517502d80f2\", \"remote_checksum\": 
\"924bb9cec4772c247782ec43a790040656d3ab31\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": 
\"e32a66ddc038f6331ba8cd3a3e75084e\", \"remote_checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": 
{\"exists\": false}}], \"md5sum\": \"f361c1725afb0640dd7f85ed53589f84\", \"remote_checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"59f9cbc63af613abd9891519100e0820\", \"remote_checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"remote_md5sum\": null}\nchanged: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"b92b9cf2d12e29b7c950a7ba01356a77\", \"remote_checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"remote_md5sum\": null}\n\nTASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84\nMonday 20 August 2018 06:30:27 -0400 (0:00:01.313) 0:01:08.088 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97\nMonday 20 August 2018 06:30:27 -0400 (0:00:00.057) 0:01:08.146 ********* \nok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.328956\", \"end\": \"2018-08-20 10:30:28.223470\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-08-20 10:30:27.894514\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-mon : stat for ceph mgr key(s)] *************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109\nMonday 20 August 2018 06:30:28 -0400 (0:00:00.782) 0:01:08.928 ********* \nok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1534761028.0999975, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, 
\"charset\": \"us-ascii\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761028.2069974, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 77909386, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1534761028.2069974, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"2012420224\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\n\nTASK [ceph-mon : fetch ceph mgr key(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121\nMonday 20 August 2018 06:30:28 -0400 (0:00:00.398) 0:01:09.327 ********* \nchanged: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761028.2069974, u'block_size': 4096, u'inode': 77909386, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'2012420224', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1534761028.0999975, u'mimetype': u'text/plain', u'ctime': 1534761028.2069974, u'isblk': False, u'checksum': u'557a22485a6e0bcdb875a5f5926bdb3409555b7d', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, 
u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {\"changed\": true, \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 1534761028.0999975, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761028.2069974, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 77909386, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1534761028.2069974, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"2012420224\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": 
false}}, "md5sum": "82352f6d3d5aac744c3838aa345e1f7c", "remote_checksum": "557a22485a6e0bcdb875a5f5926bdb3409555b7d", "remote_md5sum": null}

TASK [ceph-mon : configure crush hierarchy] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2
Monday 20 August 2018 06:30:29 -0400 (0:00:00.417) 0:01:09.744 ********* 
skipping: [controller-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : create configured crush rules] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14
Monday 20 August 2018 06:30:29 -0400 (0:00:00.065) 0:01:09.809 ********* 
skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : get id for new default crush rule] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21
Monday 20 August 2018 06:30:29 -0400 (0:00:00.074) 0:01:09.883 ********* 
skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33
Monday 20 August 2018 06:30:29 -0400 (0:00:00.077) 0:01:09.961 ********* 
skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {"changed": false, "item": {"changed": false, "item": {"default": false, "name": "HDD", "root": "HDD", "type": "host"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {"changed": false, "item": {"changed": false, "item": {"default": false, "name": "SSD", "root": "SSD", "type": "host"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41
Monday 20 August 2018 06:30:29 -0400 (0:00:00.073) 0:01:10.034 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45
Monday 20 August 2018 06:30:29 -0400 (0:00:00.160) 0:01:10.195 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : add new default crush rule to ceph.conf] **********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54
Monday 20 August 2018 06:30:29 -0400 (0:00:00.080) 0:01:10.275 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5
Monday 20 August 2018 06:30:29 -0400 (0:00:00.054) 0:01:10.329 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16
Monday 20 August 2018 06:30:29 -0400 (0:00:00.049) 0:01:10.378 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21
Monday 20 August 2018 06:30:29 -0400 (0:00:00.051) 0:01:10.430 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27
Monday 20 August 2018 06:30:29 -0400 (0:00:00.045) 0:01:10.475 ********* 
ok: [controller-0] => {"ansible_facts": {"osd_pool_default_pg_num": "32"}, "changed": false}

TASK [ceph-mon : test if calamari-server is installed] *************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:2
Monday 20 August 2018 06:30:29 -0400 (0:00:00.076) 0:01:10.552 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mon : increase calamari logging level when debug is on] *************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:18
Monday 20 August 2018 06:30:29 -0400 (0:00:00.044) 0:01:10.597 ********* 
skipping: [controller-0] => (item=cthulhu) => {"changed": false, "item": "cthulhu", "skip_reason": "Conditional result was False"}
skipping: [controller-0] => (item=calamari_web) => {"changed": false, "item": "calamari_web", "skip_reason": "Conditional result was False"}

TASK [ceph-mon : initialize the calamari server api] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29
Monday 20 August 2018 06:30:29 -0400 (0:00:00.053) 0:01:10.651 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******
Monday 20 August 2018 06:30:30 -0400 (0:00:00.017) 0:01:10.668 ********* 
ok: [controller-0] => {"ansible_facts": {"_mon_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************
Monday 20 August 2018 06:30:30 -0400 (0:00:00.072) 0:01:10.741 ********* 
changed: [controller-0] => {"changed": true, "checksum": "83f7af8323e264039a95f266faedb4a665c8f4ca", "dest": "/tmp/restart_mon_daemon.sh", "gid": 0, "group": "root", "md5sum": "a72fe8d7f7ff92960aa2e96a1b3fe152", "mode": "0750", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 1398, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761030.16-39284618963640/source", "state": "file", "uid": 0}

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***
Monday 20 August 2018 06:30:30 -0400 (0:00:00.519) 0:01:11.260 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******
Monday 20 August 2018 06:30:30 -0400 (0:00:00.094) 0:01:11.355 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********
Monday 20 August 2018 06:30:30 -0400 (0:00:00.143) 0:01:11.499 ********* 
ok: [controller-0] => {"ansible_facts": {"_mon_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******
Monday 20 August 2018 06:30:30 -0400 (0:00:00.071) 0:01:11.570 ********* 
ok: [controller-0] => {"ansible_facts": {"_osd_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************
Monday 20 August 2018 06:30:30 -0400 (0:00:00.069) 0:01:11.640 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***
Monday 20 August 2018 06:30:31 -0400 (0:00:00.051) 0:01:11.691 ********* 
skipping: [controller-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******
Monday 20 August 2018 06:30:31 -0400 (0:00:00.089) 0:01:11.780 ********* 
skipping: [controller-0] => (item=ceph-0) => {"changed": false, "item": "ceph-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********
Monday 20 August 2018 06:30:31 -0400 (0:00:00.081) 0:01:11.862 ********* 
ok: [controller-0] => {"ansible_facts": {"_osd_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******
Monday 20 August 2018 06:30:31 -0400 (0:00:00.072) 0:01:11.934 ********* 
ok: [controller-0] => {"ansible_facts": {"_mds_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************
Monday 20 August 2018 06:30:31 -0400 (0:00:00.071) 0:01:12.005 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***
Monday 20 August 2018 06:30:31 -0400 (0:00:00.043) 0:01:12.048 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******
Monday 20 August 2018 06:30:31 -0400 (0:00:00.050) 0:01:12.099 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********
Monday 20 August 2018 06:30:31 -0400 (0:00:00.055) 0:01:12.154 ********* 
ok: [controller-0] => {"ansible_facts": {"_mds_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******
Monday 20 August 2018 06:30:31 -0400 (0:00:00.071) 0:01:12.226 ********* 
ok: [controller-0] => {"ansible_facts": {"_rgw_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************
Monday 20 August 2018 06:30:31 -0400 (0:00:00.065) 0:01:12.292 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***
Monday 20 August 2018 06:30:31 -0400 (0:00:00.044) 0:01:12.337 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******
Monday 20 August 2018 06:30:31 -0400 (0:00:00.052) 0:01:12.389 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********
Monday 20 August 2018 06:30:31 -0400 (0:00:00.053) 0:01:12.443 ********* 
ok: [controller-0] => {"ansible_facts": {"_rgw_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***
Monday 20 August 2018 06:30:31 -0400 (0:00:00.072) 0:01:12.516 ********* 
ok: [controller-0] => {"ansible_facts": {"_rbdmirror_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************
Monday 20 August 2018 06:30:31 -0400 (0:00:00.075) 0:01:12.591 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***
Monday 20 August 2018 06:30:31 -0400 (0:00:00.049) 0:01:12.640 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***
Monday 20 August 2018 06:30:32 -0400 (0:00:00.059) 0:01:12.700 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***
Monday 20 August 2018 06:30:32 -0400 (0:00:00.056) 0:01:12.757 ********* 
ok: [controller-0] => {"ansible_facts": {"_rbdmirror_handler_called": false}, "changed": false}

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******
Monday 20 August 2018 06:30:32 -0400 (0:00:00.076) 0:01:12.833 ********* 
ok: [controller-0] => {"ansible_facts": {"_mgr_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************
Monday 20 August 2018 06:30:32 -0400 (0:00:00.076) 0:01:12.909 ********* 
changed: [controller-0] => {"changed": true, "checksum": "3b92c07facdbaa789b36f850d92d7444e2bb6a27", "dest": "/tmp/restart_mgr_daemon.sh", "gid": 0, "group": "root", "md5sum": "73c8d33ad2b3c95d77ee4b411e06cae6", "mode": "0750", "owner": "root", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 843, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761032.33-165568647489462/source", "state": "file", "uid": 0}

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***
Monday 20 August 2018 06:30:32 -0400 (0:00:00.499) 0:01:13.409 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******
Monday 20 August 2018 06:30:32 -0400 (0:00:00.086) 0:01:13.495 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********
Monday 20 August 2018 06:30:32 -0400 (0:00:00.127) 0:01:13.623 ********* 
ok: [controller-0] => {"ansible_facts": {"_mgr_handler_called": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mons] ********************************************************************
META: ran handlers

TASK [set ceph monitor install 'Complete'] *************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:98
Monday 20 August 2018 06:30:33 -0400 (0:00:00.108) 0:01:13.731 ********* 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mon": {"end": "20180820063033Z", "status": "Complete"}}, "per_host": false}, "changed": false}
META: ran handlers
META: ran handlers

PLAY [mgrs] ********************************************************************

TASK [set ceph manager install 'In Progress'] **********************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:110
Monday 20 August 2018 06:30:33 -0400 (0:00:00.154) 0:01:13.886 ********* 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mgr": {"start": "20180820063033Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Monday 20 August 2018 06:30:33 -0400 (0:00:00.095) 0:01:13.982 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mon-controller-0"], "delta": "0:00:00.026595", "end": "2018-08-20 10:30:33.537723", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:30:33.511128", "stderr": "", "stderr_lines": [], "stdout": "e09152a9bfe0", "stdout_lines": ["e09152a9bfe0"]}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Monday 20 August 2018 06:30:33 -0400 (0:00:00.259) 0:01:14.242 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Monday 20 August 2018 06:30:33 -0400 (0:00:00.049) 0:01:14.291 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Monday 20 August 2018 06:30:33 -0400 (0:00:00.051) 0:01:14.342 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Monday 20 August 2018 06:30:33 -0400 (0:00:00.049) 0:01:14.392 ********* 
ok: [controller-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-mgr-controller-0"], "delta": "0:00:00.023186", "end": "2018-08-20 10:30:34.048709", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:30:34.025523", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Monday 20 August 2018 06:30:34 -0400 (0:00:00.361) 0:01:14.753 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Monday 20 August 2018 06:30:34 -0400 (0:00:00.048) 0:01:14.802 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Monday 20 August 2018 06:30:34 -0400 (0:00:00.047) 0:01:14.850 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Monday 20 August 2018 06:30:34 -0400 (0:00:00.045) 0:01:14.895 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Monday 20 August 2018 06:30:34 -0400 (0:00:00.051) 0:01:14.946 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Monday 20 August 2018 06:30:34 -0400 (0:00:00.192) 0:01:15.139 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Monday 20 August 2018 06:30:34 -0400 (0:00:00.055) 0:01:15.195 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Monday 20 August 2018 06:30:34 -0400 (0:00:00.050) 0:01:15.246 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Monday 20 August 2018 06:30:34 -0400 (0:00:00.051) 0:01:15.297 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.343 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.390 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Monday 20 August 2018 06:30:34 -0400 (0:00:00.045) 0:01:15.436 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Monday 20 August 2018 06:30:34 -0400 (0:00:00.049) 0:01:15.486 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.533 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.579 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.625 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Monday 20 August 2018 06:30:35 -0400 (0:00:00.060) 0:01:15.686 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Monday 20 August 2018 06:30:35 -0400 (0:00:00.051) 0:01:15.737 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Monday 20 August 2018 06:30:35 -0400 (0:00:00.048) 0:01:15.785 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Monday 20 August 2018 06:30:35 -0400 (0:00:00.048) 0:01:15.833 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Monday 20 August 2018 06:30:35 -0400 (0:00:00.045) 0:01:15.879 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Monday 20 August 2018 06:30:35 -0400 (0:00:00.049) 0:01:15.928 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Monday 20 August 2018 06:30:35 -0400 (0:00:00.050) 0:01:15.978 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Monday 20 August 2018 06:30:35 -0400 (0:00:00.048) 0:01:16.026 ********* 
ok: [controller-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Monday 20 August 2018 06:30:35 -0400 (0:00:00.224) 0:01:16.251 ********* 
ok: [controller-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Monday 20 August 2018 06:30:35 -0400 (0:00:00.085) 0:01:16.337 ********* 
ok: [controller-0] => {"ansible_facts": {"monitor_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Monday 20 August 2018 06:30:35 -0400 (0:00:00.091) 0:01:16.429 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Monday 20 August 2018 06:30:35 -0400 (0:00:00.077) 0:01:16.506 ********* 
ok: [controller-0 -> 192.168.24.12] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Monday 20 August 2018 06:30:36 -0400 (0:00:00.161) 0:01:16.668 ********* 
ok: [controller-0 -> 192.168.24.12] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.366133", "end": "2018-08-20 10:30:36.583141", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:30:36.217008", "stderr": "", "stderr_lines": [], "stdout": "00d03b50-a460-11e8-8cf1-525400721501", "stdout_lines": ["00d03b50-a460-11e8-8cf1-525400721501"]}

TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Monday 20 August 2018 06:30:36 -0400 (0:00:00.626) 0:01:17.295 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Monday 20 August 2018 06:30:36 -0400 (0:00:00.187) 0:01:17.483 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Monday 20 August 2018 06:30:36 -0400 (0:00:00.050) 0:01:17.533 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 50, "state": "directory", "uid": 42430}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Monday 20 August 2018 06:30:37 -0400 (0:00:00.177) 0:01:17.711 ********* 
ok: [controller-0] => {"ansible_facts": {"fsid": "00d03b50-a460-11e8-8cf1-525400721501"}, "changed": false}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Monday 20 August 2018 06:30:37 -0400 (0:00:00.080) 0:01:17.791 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Monday 20 August 2018 06:30:37 -0400 (0:00:00.079) 0:01:17.871 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Monday 20 August 2018 06:30:37 -0400 (0:00:00.045) 0:01:17.917 ********* 
changed: [controller-0 -> localhost] => {"changed": true, "cmd": "echo 00d03b50-a460-11e8-8cf1-525400721501 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "delta": "0:00:00.581384", "end": "2018-08-20 06:30:37.985514", "rc": 0, "start": "2018-08-20 06:30:37.404130", "stderr": "", "stderr_lines": [], "stdout": "00d03b50-a460-11e8-8cf1-525400721501", "stdout_lines": ["00d03b50-a460-11e8-8cf1-525400721501"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Monday 20 August 2018 06:30:38 -0400 (0:00:00.781) 0:01:18.698 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Monday 20 August 2018 06:30:38 -0400 (0:00:00.052) 0:01:18.750 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Monday 20 August 2018 06:30:38 -0400 (0:00:00.047) 0:01:18.798 ********* 
ok: [controller-0] => {"ansible_facts": {"mds_name": "controller-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Monday 20 August 2018 06:30:38 -0400 (0:00:00.085) 0:01:18.883 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Monday 20 August 2018 06:30:38 -0400 (0:00:00.043) 0:01:18.927 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Monday 20 August 2018 06:30:38 -0400 (0:00:00.053) 0:01:18.980 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Monday 20 August 2018 06:30:38 -0400 (0:00:00.048) 0:01:19.029 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Monday 20 August 2018 06:30:38 -0400 (0:00:00.055) 0:01:19.084 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.134 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact build final devices list] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175
Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.184 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183
Monday 20 August 2018 06:30:38 -0400 (0:00:00.047) 0:01:19.231 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190
Monday 20 August 2018 06:30:38 -0400 (0:00:00.042) 0:01:19.274 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197
Monday 20 August 2018 06:30:38 -0400 (0:00:00.048) 0:01:19.322 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204
Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.372 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211
Monday 20 August 2018 06:30:38 -0400 (0:00:00.050) 0:01:19.422 ********* 
ok: [controller-0] => {"ansible_facts": {"ceph_uid": 167}, "changed": false}

TASK [ceph-defaults : get current cluster status (if already running)] *********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219
Monday 20 August 2018 06:30:38 -0400 (0:00:00.075) 0:01:19.498 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223
Monday 20 August 2018 06:30:38 -0400 (0:00:00.053) 0:01:19.552 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rgw_hostname] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227
Monday 20 August 2018 06:30:38 -0400 (0:00:00.051) 0:01:19.603 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rgw_hostname] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237
Monday 20 August 2018 06:30:39 -0400 (0:00:00.062) 0:01:19.666 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional
result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nMonday 20 August 2018 06:30:39 -0400 (0:00:00.049) 0:01:19.716 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nMonday 20 August 2018 06:30:39 -0400 (0:00:00.224) 0:01:19.940 ********* \nok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": 
\"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": 
\"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}\nok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nMonday 20 August 2018 06:30:41 -0400 (0:00:02.125) 0:01:22.066 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nMonday 20 August 2018 06:30:41 -0400 (0:00:00.050) 0:01:22.116 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nMonday 20 August 2018 06:30:41 -0400 (0:00:00.061) 0:01:22.178 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nMonday 20 August 2018 06:30:41 -0400 (0:00:00.053) 0:01:22.232 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nMonday 20 August 2018 06:30:41 -0400 (0:00:00.047) 0:01:22.280 ********* \nok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.383) 0:01:22.664 ********* \nok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.083) 0:01:22.748 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nMonday 20 August 2018 06:30:42 
-0400 (0:00:00.045) 0:01:22.793 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.022056\", \"end\": \"2018-08-20 10:30:42.346582\", \"rc\": 0, \"start\": \"2018-08-20 10:30:42.324526\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.257) 0:01:23.050 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.080) 0:01:23.131 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.022201\", \"end\": \"2018-08-20 10:30:42.678900\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:42.656699\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"e09152a9bfe0\", \"stdout_lines\": [\"e09152a9bfe0\"]}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.253) 0:01:23.384 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.058) 
0:01:23.442 ********* \nskipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.063) 0:01:23.505 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.054) 0:01:23.560 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nMonday 20 August 2018 06:30:42 -0400 (0:00:00.068) 0:01:23.629 ********* \nskipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => 
(item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.121) 0:01:23.751 ********* \nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, 
\"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', 
'_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.130) 0:01:23.882 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.049) 0:01:23.931 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.048) 0:01:23.980 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.052) 0:01:24.032 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.055) 0:01:24.087 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.053) 0:01:24.141 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.047) 0:01:24.188 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": 
\"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.045) 0:01:24.234 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.050) 0:01:24.284 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"e09152a9bfe0\"], \"delta\": \"0:00:00.021511\", \"end\": \"2018-08-20 10:30:43.851369\", \"rc\": 0, \"start\": \"2018-08-20 10:30:43.829858\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460\\\",\\n \\\"Created\\\": \\\"2018-08-20T10:30:09.142796973Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 44599,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": \\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-08-20T10:30:09.30892439Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hostname\\\",\\n \\\"HostsPath\\\": 
\\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n \\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 3221225472,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n 
\\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 6442450944,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9-init/diff:/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff:/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": 
\\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.14\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=00d03b50-a460-11e8-8cf1-525400721501\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-11\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n 
\\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"0278a3b0888e406c316ccc3b14210d2a79ce05281a17141a4255a1dcb51f4d88\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"77a481d4bde7dbe2de1254b4f8439d7ef986772190569fe08b0d4650df1853b3\\\",\\n \\\"EndpointID\\\": \\\"50d85912eb42c78e21a711c557cef5ab5974dde83cbe2a0ea94a839b52e6367b\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460\\\",\", \" \\\"Created\\\": \\\"2018-08-20T10:30:09.142796973Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 44599,\", \" 
\\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-08-20T10:30:09.30892439Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" \\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": 
\\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9-init/diff:/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff:/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": 
false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.14\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=00d03b50-a460-11e8-8cf1-525400721501\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-11\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" 
\\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"0278a3b0888e406c316ccc3b14210d2a79ce05281a17141a4255a1dcb51f4d88\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": 
\\\"77a481d4bde7dbe2de1254b4f8439d7ef986772190569fe08b0d4650df1853b3\\\",\", \" \\\"EndpointID\\\": \\\"50d85912eb42c78e21a711c557cef5ab5974dde83cbe2a0ea94a839b52e6367b\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.290) 0:01:24.575 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 20 August 2018 06:30:43 -0400 (0:00:00.053) 0:01:24.628 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.053) 0:01:24.682 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.054) 0:01:24.737 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.077) 0:01:24.815 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.055) 0:01:24.870 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:24.920 ********* \nok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\"], \"delta\": \"0:00:00.027415\", \"end\": \"2018-08-20 10:30:44.478818\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:44.451403\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": 
{},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 
7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n 
\\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" 
\\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.282) 0:01:25.203 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.252 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.052) 0:01:25.304 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.048) 0:01:25.353 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.052) 0:01:25.406 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.047) 0:01:25.453 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.502 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_mon_image_repodigest_before_pulling\": 
\"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.087) 0:01:25.590 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.639 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.047) 0:01:25.687 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.050) 0:01:25.738 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.048) 0:01:25.786 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.048) 0:01:25.835 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.049) 0:01:25.884 ********* \nok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.035260\", \"end\": \"2018-08-20 10:30:45.441486\", \"rc\": 0, \"start\": \"2018-08-20 10:30:45.406226\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-11\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.265) 0:01:26.150 ********* \nchanged: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.024950\", \"end\": \"2018-08-20 10:30:45.717487\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:45.692537\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n 
\\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e 
CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": 
\\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.285) 0:01:26.435 ********* \nok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.077) 0:01:26.512 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.053) 0:01:26.566 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 20 August 2018 06:30:45 -0400 (0:00:00.045) 0:01:26.611 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.657 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.044) 0:01:26.701 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.048) 0:01:26.750 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional 
result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.044) 0:01:26.795 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.059) 0:01:26.854 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.900 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.946 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.991 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:27.036 ********* \nok: 
[controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-11\", \"--version\"], \"delta\": \"0:00:00.450550\", \"end\": \"2018-08-20 10:30:47.118123\", \"rc\": 0, \"start\": \"2018-08-20 10:30:46.667573\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\"]}\n\nTASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.784) 0:01:27.821 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-30.el7cp\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release jewel] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.203) 0:01:28.024 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release kraken] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.051) 0:01:28.076 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release luminous] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.050) 0:01:28.126 ********* \nok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_release mimic] 
************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.183) 0:01:28.309 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_release nautilus] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.146) 0:01:28.456 ********* \nskipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : create bootstrap directories] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2\nMonday 20 August 2018 06:30:47 -0400 (0:00:00.043) 0:01:28.499 ********* \nchanged: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}\nchanged: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, 
"gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 26, "state": "directory", "uid": 64045}
changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 26, "state": "directory", "uid": 64045}

TASK [ceph-config : create ceph conf directory] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4
Monday 20 August 2018 06:30:48 -0400 (0:00:00.849) 0:01:29.349 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12
Monday 20 August 2018 06:30:48 -0400 (0:00:00.065) 0:01:29.415 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : create a local fetch directory if it does not exist] *******
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38
Monday 20 August 2018 06:30:48 -0400 (0:00:00.059) 0:01:29.474 ********* 
ok: [controller-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 80, "state": "directory", "uid": 42430}

TASK [ceph-config : generate cluster uuid] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54
Monday 20 August 2018 06:30:49 -0400 (0:00:00.215) 0:01:29.690 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : read cluster uuid if it already exists] ********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64
Monday 20 August 2018 06:30:49 -0400 (0:00:00.057) 0:01:29.747 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Monday 20 August 2018 06:30:49 -0400 (0:00:00.050) 0:01:29.798 ********* 
changed: [controller-0] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 117, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Monday 20 August 2018 06:30:49 -0400 (0:00:00.252) 0:01:30.051 ********* 
ok: [controller-0] => {"changed": false, "checksum": "ad274129acdf99bf79681112519249b5cd433cfc", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "md5sum": "d12c4a40219f2d53aebea240077fc57d", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1103, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761049.45-192564754238695/source", "state": "file", "uid": 0}

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Monday 20 August 2018 06:30:49 -0400 (0:00:00.564) 0:01:30.615 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2
Monday 20 August 2018 06:30:50 -0400 (0:00:00.062) 0:01:30.677 ********* 
ok: [controller-0] => {"ansible_facts": {"docker_exec_cmd_mgr": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-mgr : create mgr directory] *****************************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2
Monday 20 August 2018 06:30:50 -0400 (0:00:00.117) 0:01:30.794 ********* 
ok: [controller-0] => {"changed": false, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/mgr/ceph-controller-0", "secontext": "system_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10
Monday 20 August 2018 06:30:50 -0400 (0:00:00.244) 0:01:31.039 ********* 
changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {"changed": true, "checksum": "557a22485a6e0bcdb875a5f5926bdb3409555b7d", "dest": "/var/lib/ceph/mgr/ceph-controller-0/keyring", "gid": 167, "group": "167", "item": {"copy_key": true, "dest": "/var/lib/ceph/mgr/ceph-controller-0/keyring", "name": "/etc/ceph/ceph.mgr.controller-0.keyring"}, "md5sum": "82352f6d3d5aac744c3838aa345e1f7c", "mode": "0600", "owner": "167", "secontext": "system_u:object_r:var_lib_t:s0", "size": 67, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761050.44-264880339939330/source", "state": "file", "uid": 167}
skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {"changed": false, "item": {"copy_key": false, "dest": "/etc/ceph/ceph.client.admin.keyring", "name": "/etc/ceph/ceph.client.admin.keyring"}, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : set mgr key permissions] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24
Monday 20 August 2018 06:30:50 -0400 (0:00:00.554) 0:01:31.593 ********* 
ok: [controller-0] => {"changed": false, "gid": 167, "group": "167", "mode": "0600", "owner": "167", "path": "/var/lib/ceph/mgr/ceph-controller-0/keyring", "secontext": "system_u:object_r:var_lib_t:s0", "size": 67, "state": "file", "uid": 167}

TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2
Monday 20 August 2018 06:30:51 -0400 (0:00:00.244) 0:01:31.838 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : install ceph mgr for debian] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9
Monday 20 August 2018 06:30:51 -0400 (0:00:00.061) 0:01:31.899 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : ensure systemd service override directory exists] *************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17
Monday 20 August 2018 06:30:51 -0400 (0:00:00.073) 0:01:31.973 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25
Monday 20 August 2018 06:30:51 -0400 (0:00:00.057) 0:01:32.031 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35
Monday 20 August 2018 06:30:51 -0400 (0:00:00.086) 0:01:32.118 ********* 
skipping: [controller-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : generate systemd unit file] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2
Monday 20 August 2018 06:30:51 -0400 (0:00:00.053) 0:01:32.171 ********* 
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0
NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0
changed: [controller-0] => {"changed": true, "checksum": "999f6cead45dab5c24bf2b8115beaf5b3c3389b5", "dest": "/etc/systemd/system/ceph-mgr@.service", "gid": 0, "group": "root", "md5sum": "887c4695cb992b04476c1f085621325e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:systemd_unit_file_t:s0", "size": 734, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761051.57-35019727649894/source", "state": "file", "uid": 0}

TASK [ceph-mgr : systemd start mgr container] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13
Monday 20 August 2018 06:30:52 -0400 (0:00:00.824) 0:01:32.995 ********* 
changed: [controller-0] => {"changed": true, "enabled": true, "name": "ceph-mgr@controller-0", "state": "started", "status": {"ActiveEnterTimestampMonotonic": "0", "ActiveExitTimestampMonotonic": "0", "ActiveState": "inactive", "After": "system-ceph\\x5cx2dmgr.slice basic.target docker.service systemd-journald.socket", "AllowIsolate": "no", "AmbientCapabilities": "0", "AssertResult": "no", "AssertTimestampMonotonic": "0", "Before": "shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "18446744073709551615", "CPUAccounting": "no", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "18446744073709551615", "CanIsolate": "no", "CanReload": "no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "18446744073709551615", "ConditionResult": "no", "ConditionTimestampMonotonic": "0", "Conflicts": "shutdown.target", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "Ceph Manager", "DevicePolicy": "auto", "EnvironmentFile": "/etc/environment (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "0", "ExecMainStartTimestampMonotonic": "0", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-11 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStopPost": "{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/etc/systemd/system/ceph-mgr@.service", "GuessMainPID": "yes", "IOScheduling": "0", "Id": "ceph-mgr@controller-0.service", "IgnoreOnIsolate": "no", "IgnoreOnSnapshot": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestampMonotonic": "0", "JobTimeoutAction": "none", "JobTimeoutUSec": "0", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "18446744073709551615", "LimitCORE": "18446744073709551615", "LimitCPU": "18446744073709551615", "LimitDATA": "18446744073709551615", "LimitFSIZE": "18446744073709551615", "LimitLOCKS": "18446744073709551615", "LimitMEMLOCK": "65536", "LimitMSGQUEUE": "819200", "LimitNICE": "0", "LimitNOFILE": "4096", "LimitNPROC": "127799", "LimitRSS": "18446744073709551615", "LimitRTPRIO": "0", "LimitRTTIME": "18446744073709551615", "LimitSIGPENDING": "127799", "LimitSTACK": "18446744073709551615", "LoadState": "loaded", "MainPID": "0", "MemoryAccounting": "no", "MemoryCurrent": "18446744073709551615", "MemoryLimit": "18446744073709551615", "MountFlags": "0", "Names": "ceph-mgr@controller-0.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "PrivateDevices": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "ProtectHome": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "Requires": "basic.target", "Restart": "always", "RestartUSec": "10s", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system-ceph\\x5cx2dmgr.slice", "StandardError": "inherit", "StandardInput": "null", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitInterval": "10000000", "StartupBlockIOWeight": "18446744073709551615", "StartupCPUShares": "18446744073709551615", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "dead", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "no", "TasksCurrent": "18446744073709551615", "TasksMax": "18446744073709551615", "TimeoutStartUSec": "2min", "TimeoutStopUSec": "15s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "simple", "UMask": "0022", "UnitFilePreset": "disabled", "UnitFileState": "disabled", "Wants": "system-ceph\\x5cx2dmgr.slice", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}

TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19
Monday 20 August 2018 06:30:52 -0400 (0:00:00.525) 0:01:33.521 ********* 
changed: [controller-0 -> 192.168.24.12] => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "--format", "json", "mgr", "module", "ls"], "delta": "0:00:00.345559", "end": "2018-08-20 10:30:53.434508", "rc": 0, "start": "2018-08-20 10:30:53.088949", "stderr": "", "stderr_lines": [], "stdout": "\n{\"enabled_modules\":[\"balancer\",\"restful\",\"status\"],\"disabled_modules\":[\"dashboard\",\"influx\",\"localpool\",\"prometheus\",\"selftest\",\"zabbix\"]}", "stdout_lines": ["", "{\"enabled_modules\":[\"balancer\",\"restful\",\"status\"],\"disabled_modules\":[\"dashboard\",\"influx\",\"localpool\",\"prometheus\",\"selftest\",\"zabbix\"]}"]}

TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26
Monday 20 August 2018 06:30:53 -0400 (0:00:00.617) 0:01:34.139 ********* 
ok: [controller-0] => {"ansible_facts": {"_ceph_mgr_modules": {"disabled_modules": ["dashboard", "influx", "localpool", "prometheus", "selftest", "zabbix"], "enabled_modules": ["balancer", "restful", "status"]}}, "changed": false}

TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32
Monday 20 August 2018 06:30:53 -0400 (0:00:00.086) 0:01:34.226 ********* 
ok: [controller-0] => {"ansible_facts": {"_disabled_ceph_mgr_modules": "[Undefined, Undefined, Undefined, Undefined, Undefined, Undefined]"}, "changed": false}

TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38
Monday 20 August 2018 06:30:53 -0400 (0:00:00.108) 0:01:34.334 ********* 
changed: [controller-0 -> 192.168.24.12] => (item=balancer) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "mgr", "module", "disable", "balancer"], "delta": "0:00:01.324554", "end": "2018-08-20 10:30:55.205683", "item": "balancer", "rc": 0, "start": "2018-08-20 10:30:53.881129", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [controller-0 -> 192.168.24.12] => (item=restful) => {"changed": true, "cmd": ["docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "mgr", "module", "disable", "restful"], "delta": "0:00:00.832155", "end": "2018-08-20 10:30:56.208878", "item": "restful", "rc": 0, "start": "2018-08-20 10:30:55.376723", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [controller-0] => (item=status) => {"changed": false, "item": "status", "skip_reason": "Conditional result was False"}

TASK [ceph-mgr : add modules to ceph-mgr] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49
Monday 20 August 2018 06:30:56 -0400 (0:00:00.035) 0:01:36.991 ********* 
skipping: [controller-0] => (item=status) => {"changed": false, "item": "status", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******
Monday 20 August 2018 06:30:56 -0400 (0:00:02.621) 0:01:36.955 ********* 
ok: [controller-0] => {"ansible_facts": {"_mgr_handler_called": true}, "changed": false}

RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************
Monday 20 August 2018 06:30:56 -0400 (0:00:00.186) 0:01:37.177 ********* 
ok: [controller-0] => {"changed": false, "checksum": "3b92c07facdbaa789b36f850d92d7444e2bb6a27", "dest": "/tmp/restart_mgr_daemon.sh", "gid": 0, "group": "root", "mode": "0750", "owner": "root", "path": "/tmp/restart_mgr_daemon.sh", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 843, "state": "file", "uid": 0}

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***
Monday 20 August 2018 06:30:57 -0400 (0:00:00.569) 0:01:37.747 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******
Monday 20 August 2018 06:30:57 -0400 (0:00:00.086) 0:01:37.833 ********* 
skipping: [controller-0] => (item=controller-0) => {"changed": false, "item": "controller-0", "skip_reason": "Conditional result was False"}

RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********
Monday 20 August 2018 06:30:57 -0400 (0:00:00.128) 0:01:37.962 ********* 
ok: [controller-0] => {"ansible_facts": {"_mgr_handler_called": false}, "changed": false}
META: ran handlers

TASK [set ceph manager install 'Complete'] *************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:129
Monday 20 August 2018 06:30:57 -0400 (0:00:00.208) 0:01:38.170 ********* 
ok: [controller-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_mgr": {"end": "20180820063057Z", "status": "Complete"}}, "per_host": false}, "changed": false}
META: ran handlers

PLAY [osds] ********************************************************************

TASK [set ceph osd install 'In Progress'] **************************************
task path: /usr/share/ceph-ansible/site-docker.yml.sample:141
Monday 20 August 2018 06:30:57 -0400 (0:00:00.331) 0:01:38.501 ********* 
ok: [ceph-0] => {"ansible_stats": {"aggregate": true, "data": {"installer_phase_ceph_osd": {"start": "20180820063057Z", "status": "In Progress"}}, "per_host": false}, "changed": false}
META: ran handlers

TASK [ceph-defaults : check for a mon container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2
Monday 20 August 2018 06:30:57 -0400 (0:00:00.094) 0:01:38.596 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for an osd container] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11
Monday 20 August 2018 06:30:57 -0400 (0:00:00.051) 0:01:38.648 ********* 
ok: [ceph-0] => {"changed": false, "cmd": ["docker", "ps", "-q", "--filter=name=ceph-osd-ceph-0"], "delta": "0:00:00.030774", "end": "2018-08-20 10:30:58.213214", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:30:58.182440", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-defaults : check for a mds container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20
Monday 20 August 2018 06:30:58 -0400 (0:00:00.271) 0:01:38.919 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rgw container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29
Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:38.971 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a mgr container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38
Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:39.022 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a rbd mirror container] ************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47
Monday 20 August 2018 06:30:58 -0400 (0:00:00.047) 0:01:39.070 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a nfs container] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56
Monday 20 August 2018 06:30:58 -0400 (0:00:00.048) 0:01:39.118 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mon socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2
Monday 20 August 2018 06:30:58 -0400 (0:00:00.050) 0:01:39.169 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11
Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:39.220 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21
Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.260 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph osd socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30
Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.300 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40
Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.341 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50
Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.381 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mds socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59
Monday 20 August 2018 06:30:58 -0400 (0:00:00.045) 0:01:39.426 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69
Monday 20 August 2018 06:30:58 -0400 (0:00:00.042) 0:01:39.469 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79
Monday 20 August 2018 06:30:58 -0400 (0:00:00.041) 0:01:39.510 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rgw socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88
Monday 20 August 2018 06:30:58 -0400 (0:00:00.039) 0:01:39.550 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98
Monday 20 August 2018 06:30:58 -0400 (0:00:00.041) 0:01:39.592 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108
Monday 20 August 2018 06:30:58 -0400 (0:00:00.039) 0:01:39.631 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph mgr socket] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117
Monday 20 August 2018 06:30:59 -0400 (0:00:00.043) 0:01:39.675 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127
Monday 20 August 2018 06:30:59 -0400 (0:00:00.048) 0:01:39.723 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137
Monday 20 August 2018 06:30:59 -0400 (0:00:00.048) 0:01:39.772 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146
Monday 20 August 2018 06:30:59 -0400 (0:00:00.083) 0:01:39.856 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156
Monday 20 August 2018 06:30:59 -0400 (0:00:00.070) 0:01:39.927 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166
Monday 20 August 2018 06:30:59 -0400 (0:00:00.053) 0:01:39.980 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175
Monday 20 August 2018 06:30:59 -0400 (0:00:00.047) 0:01:40.028 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184
Monday 20 August 2018 06:30:59 -0400 (0:00:00.062) 0:01:40.090 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194
Monday 20 August 2018 06:30:59 -0400 (0:00:00.054) 0:01:40.145 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : check if it is atomic host] ******************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2
Monday 20 August 2018 06:30:59 -0400 (0:00:00.056) 0:01:40.202 ********* 
ok: [ceph-0] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact is_atomic] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7
Monday 20 August 2018 06:30:59 -0400 (0:00:00.199) 0:01:40.401 ********* 
ok: [ceph-0] => {"ansible_facts": {"is_atomic": false}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11
Monday 20 August 2018 06:30:59 -0400 (0:00:00.079) 0:01:40.480 ********* 
ok: [ceph-0] => {"ansible_facts": {"monitor_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17
Monday 20 August 2018 06:30:59 -0400 (0:00:00.082) 0:01:40.563 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23
Monday 20 August 2018 06:30:59 -0400 (0:00:00.079) 0:01:40.643 ********* 
ok: [ceph-0 -> 192.168.24.12] => {"ansible_facts": {"docker_exec_cmd": "docker exec ceph-mon-controller-0"}, "changed": false}

TASK [ceph-defaults : is ceph running already?] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34
Monday 20 August 2018 06:31:00 -0400 (0:00:00.141) 0:01:40.784 ********* 
ok: [ceph-0 -> 192.168.24.12] => {"changed": false, "cmd": ["timeout", "5", "docker", "exec", "ceph-mon-controller-0", "ceph", "--cluster", "ceph", "fsid"], "delta": "0:00:00.389777", "end": "2018-08-20 10:31:00.737159", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:31:00.347382", "stderr": "", "stderr_lines": [], "stdout": "00d03b50-a460-11e8-8cf1-525400721501", "stdout_lines": ["00d03b50-a460-11e8-8cf1-525400721501"]}

TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47
Monday 20 August 2018 06:31:00 -0400 (0:00:00.668) 0:01:41.453 ********* 
ok: [ceph-0 -> localhost] => {"changed": false, "stat": {"exists": false}}

TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57
Monday 20 August 2018 06:31:00 -0400 (0:00:00.173) 0:01:41.626 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : create a local fetch directory if it does not exist] *****
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64
Monday 20 August 2018 06:31:01 -0400 (0:00:00.049) 0:01:41.676 ********* 
ok: [ceph-0 -> localhost] => {"changed": false, "gid": 42430, "group": "mistral", "mode": "0755", "owner": "mistral", "path": "/var/lib/mistral/overcloud/ceph-ansible/fetch_dir", "size": 80, "state": "directory", "uid": 42430}

TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74
Monday 20 August 2018 06:31:01 -0400 (0:00:00.189) 0:01:41.865 ********* 
ok: [ceph-0] => {"ansible_facts": {"fsid": "00d03b50-a460-11e8-8cf1-525400721501"}, "changed": false}

TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81
Monday 20 August 2018 06:31:01 -0400 (0:00:00.067) 0:01:41.932 ********* 
ok: [ceph-0] => {"ansible_facts": {"ceph_release": "dummy"}, "changed": false}

TASK [ceph-defaults : generate cluster fsid] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85
Monday 20 August 2018 06:31:01 -0400 (0:00:00.084) 0:01:42.016 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96
Monday 20 August 2018 06:31:01 -0400 (0:00:00.044) 0:01:42.061 ********* 
ok: [ceph-0 -> localhost] => {"changed": false, "cmd": "echo 00d03b50-a460-11e8-8cf1-525400721501 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf", "rc": 0, "stdout": "skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists", "stdout_lines": ["skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists"]}

TASK [ceph-defaults : read cluster fsid if it already exists] ******************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105
Monday 20 August 2018 06:31:01 -0400 (0:00:00.183) 0:01:42.244 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact fsid] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117
Monday 20 August 2018 06:31:01 -0400 (0:00:00.043) 0:01:42.288 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123
Monday 20 August 2018 06:31:01 -0400 (0:00:00.044) 0:01:42.333 ********* 
ok: [ceph-0] => {"ansible_facts": {"mds_name": "ceph-0"}, "changed": false}

TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129
Monday 20 August 2018 06:31:01 -0400 (0:00:00.086) 0:01:42.419 ********* 
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135
Monday 20 August 2018 06:31:01 -0400 (0:00:00.049) 0:01:42.469 ********* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_owner": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142
Monday 20 August 2018 06:31:01 -0400 (0:00:00.070) 0:01:42.540 ********* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_group": "ceph"}, "changed": false}

TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149
Monday 20 August 2018 06:31:01 -0400 (0:00:00.072) 0:01:42.613 ********* 
ok: [ceph-0] => {"ansible_facts": {"rbd_client_directory_mode": "0770"}, "changed": false}

TASK [ceph-defaults : resolve device link(s)] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156
Monday 20 August 2018 06:31:02 -0400 (0:00:00.075) 0:01:42.689 ********* 
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdb"], "delta": "0:00:00.002677", "end": "2018-08-20 10:31:02.227220", "item": "/dev/vdb", "rc": 0, "start": "2018-08-20 10:31:02.224543", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdb", "stdout_lines": ["/dev/vdb"]}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdc"], "delta": "0:00:00.002381", "end": "2018-08-20 10:31:02.387310", "item": "/dev/vdc", "rc": 0, "start": "2018-08-20 10:31:02.384929", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdc", "stdout_lines": ["/dev/vdc"]}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdd"], "delta": "0:00:00.002341", "end": "2018-08-20 10:31:02.537575", "item": "/dev/vdd", "rc": 0, "start": "2018-08-20 10:31:02.535234", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdd", "stdout_lines": ["/dev/vdd"]}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vde"], "delta": "0:00:00.002424", "end": "2018-08-20 10:31:02.686784", "item": "/dev/vde", "rc": 0, "start": "2018-08-20 10:31:02.684360", "stderr": "", "stderr_lines": [], "stdout": "/dev/vde", "stdout_lines": ["/dev/vde"]}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "cmd": ["readlink", "-f", "/dev/vdf"], "delta": "0:00:00.002270", "end": "2018-08-20 10:31:02.829823", "item": "/dev/vdf", "rc": 0, "start": "2018-08-20 10:31:02.827553", "stderr": "", "stderr_lines": [], "stdout": "/dev/vdf", "stdout_lines": ["/dev/vdf"]}

TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166
Monday 20
August 2018 06:31:02 -0400 (0:00:00.838) 0:01:43.527 ********* \nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.227220', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002677', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-08-20 10:31:02.224543', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002677\", \"end\": \"2018-08-20 10:31:02.227220\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.224543\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.387310', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 'item': u'/dev/vdc', u'delta': u'0:00:00.002381', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': 
None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-08-20 10:31:02.384929', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.002381\", \"end\": \"2018-08-20 10:31:02.387310\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.384929\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.537575', '_ansible_no_log': False, u'stdout': u'/dev/vdd', u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.002341', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-08-20 10:31:02.535234', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.002341\", \"end\": \"2018-08-20 10:31:02.537575\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdd\", \"_uses_shell\": false, 
\"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.535234\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.686784', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.002424', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-08-20 10:31:02.684360', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.002424\", \"end\": \"2018-08-20 10:31:02.686784\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.684360\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}}\nok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.829823', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': 
u'0:00:00.002270', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-08-20 10:31:02.827553', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.002270\", \"end\": \"2018-08-20 10:31:02.829823\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.827553\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}}\n\nTASK [ceph-defaults : set_fact build final devices list] ***********************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.273) 0:01:43.800 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.213) 0:01:44.014 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.045) 0:01:44.059 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.048) 0:01:44.108 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.050) 0:01:44.158 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.053) 0:01:44.211 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}\n\nTASK [ceph-defaults : get current cluster status (if already running)] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.185) 0:01:44.397 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.048) 0:01:44.445 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.046) 0:01:44.491 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-defaults : set_fact rgw_hostname] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237\nMonday 20 August 2018 06:31:03 -0400 (0:00:00.045) 0:01:44.537 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"rgw_hostname\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-defaults : set_fact ceph_directories] *******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2\nMonday 20 August 2018 06:31:04 -0400 (0:00:00.176) 0:01:44.713 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}\n\nTASK [ceph-defaults : create ceph initial directories] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18\nMonday 20 August 2018 06:31:04 -0400 (0:00:00.178) 0:01:44.892 ********* \nchanged: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", 
\"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 
167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\nchanged: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-docker-common : fail if systemd is not present] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2\nMonday 20 August 2018 06:31:06 -0400 (0:00:01.907) 0:01:46.799 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.051) 0:01:46.850 ********* \nskipping: 
[ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.048) 0:01:46.899 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : warning deprecation for fqdn configuration] *********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.048) 0:01:46.948 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove ceph udev rules] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.042) 0:01:46.991 ********* \nok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}\nok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.391) 0:01:47.382 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.076) 0:01:47.459 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get docker version] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26\nMonday 20 August 2018 06:31:06 -0400 (0:00:00.042) 0:01:47.501 ********* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.019686\", \"end\": \"2018-08-20 10:31:07.027927\", \"rc\": 0, \"start\": \"2018-08-20 10:31:07.008241\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}\n\nTASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.226) 0:01:47.727 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}\n\nTASK [ceph-docker-common : check if a cluster is already running] **************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.075) 0:01:47.803 ********* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.018754\", \"end\": \"2018-08-20 10:31:07.335491\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:07.316737\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2\nMonday 20 August 2018 06:31:07 -0400 
(0:00:00.229) 0:01:48.032 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.088) 0:01:48.120 ********* \nok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.132) 0:01:48.253 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.088) 0:01:48.342 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}\n\nTASK [ceph-docker-common : stat for ceph config and keys] **********************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30\nMonday 20 August 2018 06:31:07 -0400 (0:00:00.090) 0:01:48.433 ********* \nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1534761026.5397818, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"ctime\": 1534761026.5397818, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464328, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.5397818, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1534761026.7087815, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"ctime\": 1534761026.7077816, \"dev\": 64769, \"device_type\": 0, 
\"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464329, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.7077816, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1534761026.8687813, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"ctime\": 1534761026.8687813, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 26262221, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.8687813, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 
-> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1534761027.030781, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"ctime\": 1534761027.030781, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30071520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.030781, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1534761027.1997807, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"ctime\": 1534761027.1987808, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 34251791, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", 
\"mtime\": 1534761027.1987808, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1534761027.3837805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"ctime\": 1534761027.3837805, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 38394477, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.3837805, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\nok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1534761029.036778, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, 
\"charset\": \"unknown\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761029.0357778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464330, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761029.0357778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}\n\nTASK [ceph-docker-common : fail if we find existing cluster files] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5\nMonday 20 August 2018 06:31:09 -0400 (0:00:01.314) 0:01:49.747 ********* \nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.5397818, u'block_size': 4096, u'inode': 9464328, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1534761026.5397818, 
u'mimetype': u'unknown', u'ctime': 1534761026.5397818, u'isblk': False, u'checksum': u'32793d89de7819833a3849e42af57849c578f1ee', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1534761026.5397818, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"ctime\": 1534761026.5397818, \"dev\": 64769, 
\"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464328, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.5397818, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, 
\"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.7077816, u'block_size': 4096, u'inode': 9464329, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1534761026.7087815, u'mimetype': u'unknown', u'ctime': 1534761026.7077816, u'isblk': False, u'checksum': u'924bb9cec4772c247782ec43a790040656d3ab31', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, 
u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1534761026.7087815, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"ctime\": 1534761026.7077816, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464329, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.7077816, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was 
False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.8687813, u'block_size': 4096, u'inode': 26262221, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1534761026.8687813, u'mimetype': u'unknown', u'ctime': 1534761026.8687813, u'isblk': False, u'checksum': u'698d347fdbde95d7d515a3d48d03b13806292388', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, 
\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1534761026.8687813, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"ctime\": 1534761026.8687813, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 26262221, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.8687813, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 
1534761027.030781, u'block_size': 4096, u'inode': 30071520, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1534761027.030781, u'mimetype': u'unknown', u'ctime': 1534761027.030781, u'isblk': False, u'checksum': u'5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": 
{\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1534761027.030781, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"ctime\": 1534761027.030781, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30071520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.030781, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761027.1987808, u'block_size': 4096, u'inode': 34251791, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, 
u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1534761027.1997807, u'mimetype': u'unknown', u'ctime': 1534761027.1987808, u'isblk': False, u'checksum': u'0b81209fa4aacb4370dae6fcb06b8a43d48ed42d', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1534761027.1997807, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"ctime\": 1534761027.1987808, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 34251791, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.1987808, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761027.3837805, u'block_size': 4096, u'inode': 38394477, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1534761027.3837805, u'mimetype': u'unknown', u'ctime': 1534761027.3837805, u'isblk': False, u'checksum': u'963a0d4350677a12a72614a09b2996d236b0a6d6', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1534761027.3837805, 
\"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"ctime\": 1534761027.3837805, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 38394477, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.3837805, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761029.0357778, u'block_size': 4096, u'inode': 9464330, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1534761029.036778, u'mimetype': u'unknown', u'ctime': 1534761029.0357778, u'isblk': False, u'checksum': 
u'557a22485a6e0bcdb875a5f5926bdb3409555b7d', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1534761029.036778, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761029.0357778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 
42430, \"gr_name\": \"mistral\", \"inode\": 9464330, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761029.0357778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on atomic] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.318) 0:01:50.066 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.039) 0:01:50.106 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on redhat or suse] ***********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.040) 0:01:50.146 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on redhat or suse] **********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13\nMonday 
20 August 2018 06:31:09 -0400 (0:00:00.048) 0:01:50.195 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.241 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : check ntp installation on debian] *******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.288 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : install ntp on debian] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.053) 0:01:50.341 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : start the ntp service] ******************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.387 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mon container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.049) 0:01:50.436 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph osd container] *************************\ntask path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.054) 0:01:50.491 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mds container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.054) 0:01:50.545 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rgw container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.050) 0:01:50.596 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph mgr container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39\nMonday 20 August 2018 06:31:09 -0400 (0:00:00.042) 0:01:50.639 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph rbd mirror container] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:50.685 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspect ceph nfs container] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:50.729 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mon 
container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.055) 0:01:50.784 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.046) 0:01:50.831 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.046) 0:01:50.877 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.040) 0:01:50.918 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:50.963 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.042) 0:01:51.005 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}\n\nTASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.050) 0:01:51.055 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:51.100 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.146 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.051) 0:01:51.198 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.243 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:51.287 ********* \nskipping: [ceph-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.048) 0:01:51.336 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.050) 0:01:51.386 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179\nMonday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.432 ********* \nok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:12.664455\", \"end\": \"2018-08-20 10:31:23.610575\", \"rc\": 0, \"start\": \"2018-08-20 10:31:10.946120\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nd02c3bd49e78: Pulling fs layer\\n475b0168c252: Pulling fs layer\\n9cc28bc5e4f9: Pulling fs layer\\n475b0168c252: Download complete\\nd02c3bd49e78: Verifying Checksum\\nd02c3bd49e78: Download complete\\n9cc28bc5e4f9: Verifying Checksum\\n9cc28bc5e4f9: Download complete\\nd02c3bd49e78: Pull complete\\n475b0168c252: Pull complete\\n9cc28bc5e4f9: Pull complete\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"d02c3bd49e78: Pulling fs layer\", \"475b0168c252: Pulling fs layer\", \"9cc28bc5e4f9: Pulling fs layer\", \"475b0168c252: Download complete\", \"d02c3bd49e78: Verifying Checksum\", \"d02c3bd49e78: Download complete\", \"9cc28bc5e4f9: Verifying Checksum\", \"9cc28bc5e4f9: Download complete\", \"d02c3bd49e78: Pull complete\", \"475b0168c252: Pull complete\", \"9cc28bc5e4f9: Pull complete\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\"]}\n\nTASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189\nMonday 20 August 2018 06:31:23 -0400 (0:00:12.883) 0:02:04.316 ********* \nchanged: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.022495\", \"end\": \"2018-08-20 10:31:23.851040\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:23.828545\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n 
\\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": 
\\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/18a142311a48efd57707657709b7e403db31f660db7f02e0cc514775dc4b6ac8/diff:/var/lib/docker/overlay2/724e96af25c6a782ebb1570fc169a5d43b3ee2e8bb616c568ba70d5106537d58/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/18a142311a48efd57707657709b7e403db31f660db7f02e0cc514775dc4b6ac8/diff:/var/lib/docker/overlay2/724e96af25c6a782ebb1570fc169a5d43b3ee2e8bb616c568ba70d5106537d58/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}\n\nTASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194\nMonday 20 August 2018 06:31:23 -0400 (0:00:00.245) 0:02:04.561 ********* \nok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}\n\nTASK [ceph-docker-common : set_fact ceph_mon_image_updated] ********************\ntask 
path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200\nMonday 20 August 2018 06:31:23 -0400 (0:00:00.086) 0:02:04.647 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.056) 0:02:04.704 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.049) 0:02:04.754 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.043) 0:02:04.797 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.842 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.887 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : set_fact 
ceph_nfs_image_updated] ********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.932 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : export local ceph dev image] ************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.051) 0:02:04.983 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : copy ceph dev image file] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:05.028 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : load ceph dev image] ********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.045) 0:02:05.073 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : remove tmp ceph dev image file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.042) 0:02:05.116 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-docker-common : get ceph version] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84\nMonday 20 August 2018 06:31:24 -0400 (0:00:00.045) 0:02:05.161 ********* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", 
"--entrypoint", "/usr/bin/ceph", "192.168.24.1:8787/rhceph:3-11", "--version"], "delta": "0:00:00.433727", "end": "2018-08-20 10:31:25.105873", "rc": 0, "start": "2018-08-20 10:31:24.672146", "stderr": "", "stderr_lines": [], "stdout": "ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)", "stdout_lines": ["ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)"]}

TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90
Monday 20 August 2018 06:31:25 -0400 (0:00:00.644) 0:02:05.806 *********
ok: [ceph-0] => {"ansible_facts": {"ceph_version": "12.2.4-30.el7cp"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release jewel] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2
Monday 20 August 2018 06:31:25 -0400 (0:00:00.176) 0:02:05.983 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8
Monday 20 August 2018 06:31:25 -0400 (0:00:00.043) 0:02:06.026 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release luminous] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14
Monday 20 August 2018 06:31:25 -0400 (0:00:00.043) 0:02:06.070 *********
ok: [ceph-0] => {"ansible_facts": {"ceph_release": "luminous"}, "changed": false}

TASK [ceph-docker-common : set_fact ceph_release mimic] ************************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20
Monday 20 August 2018 06:31:25 -0400 (0:00:00.069) 0:02:06.140 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26
Monday 20 August 2018 06:31:25 -0400 (0:00:00.050) 0:02:06.190 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-docker-common : create bootstrap directories] ***********************
task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2
Monday 20 August 2018 06:31:25 -0400 (0:00:00.057) 0:02:06.248 *********
changed: [ceph-0] => (item=/etc/ceph) => {"changed": true, "gid": 64045, "group": "64045", "item": "/etc/ceph", "mode": "0755", "owner": "64045", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-osd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-osd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-mds", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-mds", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rgw", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rgw", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {"changed": true, "gid": 64045, "group": "64045", "item": "/var/lib/ceph/bootstrap-rbd", "mode": "0755", "owner": "64045", "path": "/var/lib/ceph/bootstrap-rbd", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 64045}

TASK [ceph-config : create ceph conf directory] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4
Monday 20 August 2018 06:31:26 -0400 (0:00:00.950) 0:02:07.199 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12
Monday 20 August 2018 06:31:26 -0400 (0:00:00.043) 0:02:07.242 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : create a local fetch directory if it does not exist] *******
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38
Monday 20 August 2018 06:31:26 -0400 (0:00:00.045) 0:02:07.288 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : generate cluster uuid] *************************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54
Monday 20 August 2018 06:31:26 -0400 (0:00:00.053) 0:02:07.341 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : read cluster uuid if it already exists] ********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64
Monday 20 August 2018 06:31:26 -0400 (0:00:00.045) 0:02:07.387 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Monday 20 August 2018 06:31:26 -0400 (0:00:00.042) 0:02:07.429 *********
changed: [ceph-0] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Monday 20 August 2018 06:31:27 -0400 (0:00:00.314) 0:02:07.744 *********
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0
NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0
NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0
NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0
changed: [ceph-0] => {"changed": true, "checksum": "e4920028e2dd848015696ddfcacfa786c16605f9", "dest": "/etc/ceph/ceph.conf", "gid": 0, "group": "root", "md5sum": "d5a7dc456ede0edf6350f8cd7ff9f719", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 1213, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761087.24-274813996358616/source", "state": "file", "uid": 0}

TASK [ceph-config : set fsid fact when generate_fsid = true] *******************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102
Monday 20 August 2018 06:31:29 -0400 (0:00:02.059) 0:02:09.803 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure public_network configured] **************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2
Monday 20 August 2018 06:31:29 -0400 (0:00:00.051) 0:02:09.855 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure cluster_network configured] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8
Monday 20 August 2018 06:31:29 -0400 (0:00:00.046) 0:02:09.902 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure journal_size configured] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15
Monday 20 August 2018 06:31:29 -0400 (0:00:00.047) 0:02:09.949 *********
ok: [ceph-0] => {
 "msg": "WARNING: journal_size is configured to 512, which is less than 5GB. This is not recommended and can lead to severe issues."
}

TASK [ceph-osd : make sure an osd scenario was chosen] *************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23
Monday 20 August 2018 06:31:29 -0400 (0:00:00.085) 0:02:10.035 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31
Monday 20 August 2018 06:31:29 -0400 (0:00:00.062) 0:02:10.098 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify devices have been provided] ****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39
Monday 20 August 2018 06:31:29 -0400 (0:00:00.055) 0:02:10.153 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49
Monday 20 August 2018 06:31:29 -0400 (0:00:00.073) 0:02:10.227 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify lvm_volumes have been provided] ************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59
Monday 20 August 2018 06:31:29 -0400 (0:00:00.057) 0:02:10.284 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69
Monday 20 August 2018 06:31:29 -0400 (0:00:00.053) 0:02:10.338 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the devices variable is a list] *********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79
Monday 20 August 2018 06:31:29 -0400 (0:00:00.052) 0:02:10.390 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : verify dedicated devices have been provided] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88
Monday 20 August 2018 06:31:29 -0400 (0:00:00.059) 0:02:10.450 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98
Monday 20 August 2018 06:31:29 -0400 (0:00:00.053) 0:02:10.503 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109
Monday 20 August 2018 06:31:29 -0400 (0:00:00.055) 0:02:10.559 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include system_tuning.yml] ************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5
Monday 20 August 2018 06:31:29 -0400 (0:00:00.048) 0:02:10.608 *********
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0

TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2
Monday 20 August 2018 06:31:30 -0400 (0:00:00.079) 0:02:10.687 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11
Monday 20 August 2018 06:31:30 -0400 (0:00:00.043) 0:02:10.731 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : create tmpfiles.d directory] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22
Monday 20 August 2018 06:31:30 -0400 (0:00:00.050) 0:02:10.782 *********
ok: [ceph-0] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/tmpfiles.d", "secontext": "system_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0}

TASK [ceph-osd : disable transparent hugepage] *********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33
Monday 20 August 2018 06:31:30 -0400 (0:00:00.323) 0:02:11.105 *********
changed: [ceph-0] => {"changed": true, "checksum": "e000059a4cfd8ce350b13f14305a46eaf99849ba", "dest": "/etc/tmpfiles.d/ceph_transparent_hugepage.conf", "gid": 0, "group": "root", "md5sum": "21ac872f3aa1fb44b01d4f7ab00a35fc", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 158, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761090.6-255378817369816/source", "state": "file", "uid": 0}

TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45
Monday 20 August 2018 06:31:31 -0400 (0:00:00.604) 0:02:11.710 *********
ok: [ceph-0] => {"changed": false, "cmd": ["sysctl", "-b", "vm.min_free_kbytes"], "delta": "0:00:00.004716", "end": "2018-08-20 10:31:31.242585", "failed_when_result": false, "rc": 0, "start": "2018-08-20 10:31:31.237869", "stderr": "", "stderr_lines": [], "stdout": "67584", "stdout_lines": ["67584"]}

TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52
Monday 20 August 2018 06:31:31 -0400 (0:00:00.234) 0:02:11.945 *********
ok: [ceph-0] => {"ansible_facts": {"vm_min_free_kbytes": "67584"}, "changed": false}

TASK [ceph-osd : apply operating system tuning] ********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56
Monday 20 August 2018 06:31:31 -0400 (0:00:00.201) 0:02:12.147 *********
changed: [ceph-0] => (item={u'enable': u"(osd_objectstore == 'bluestore')", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {"changed": true, "item": {"enable": "(osd_objectstore == 'bluestore')", "name": "fs.aio-max-nr", "value": "1048576"}}
changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {"changed": true, "item": {"name": "fs.file-max", "value": 26234859}}
changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {"changed": true, "item": {"name": "vm.zone_reclaim_mode", "value": 0}}
changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {"changed": true, "item": {"name": "vm.swappiness", "value": 10}}
changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {"changed": true, "item": {"name": "vm.min_free_kbytes", "value": "67584"}}

TASK [ceph-osd : install dependencies] *****************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10
Monday 20 August 2018 06:31:32 -0400 (0:00:01.135) 0:02:13.282 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : include common.yml] *******************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18
Monday 20 August 2018 06:31:32 -0400 (0:00:00.043) 0:02:13.326 *********
included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0

TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2
Monday 20 August 2018 06:31:32 -0400 (0:00:00.075) 0:02:13.402 *********
changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {"changed": true, "gid": 167, "group": "167", "item": "/var/lib/ceph/bootstrap-osd/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/bootstrap-osd/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}
ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {"changed": false, "gid": 167, "group": "167", "item": "/var/lib/ceph/osd/", "mode": "0755", "owner": "167", "path": "/var/lib/ceph/osd/", "secontext": "unconfined_u:object_r:var_lib_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15
Monday 20 August 2018 06:31:33 -0400 (0:00:00.476) 0:02:13.878 *********
changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {"changed": true, "checksum": "698d347fdbde95d7d515a3d48d03b13806292388", "dest": "/var/lib/ceph/bootstrap-osd/ceph.keyring", "gid": 167, "group": "167", "item": {"copy_key": true, "name": "/var/lib/ceph/bootstrap-osd/ceph.keyring"}, "md5sum": "e32a66ddc038f6331ba8cd3a3e75084e", "mode": "0600", "owner": "167", "secontext": "system_u:object_r:var_lib_t:s0", "size": 113, "src": "/tmp/ceph_ansible_tmp/ansible-tmp-1534761093.39-275011204224461/source", "state": "file", "uid": 167}
skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {"changed": false, "item": {"copy_key": false, "name": "/etc/ceph/ceph.client.admin.keyring"}, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2
Monday 20 August 2018 06:31:33 -0400 (0:00:00.677) 0:02:14.555 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11
Monday 20 August 2018 06:31:33 -0400 (0:00:00.043) 0:02:14.599 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20
Monday 20 August 2018 06:31:33 -0400 (0:00:00.052) 0:02:14.651 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29
Monday 20 August 2018 06:31:34 -0400 (0:00:00.053) 0:02:14.705 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38
Monday 20 August 2018 06:31:34 -0400 (0:00:00.042) 0:02:14.748 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47
Monday 20 August 2018 06:31:34 -0400 (0:00:00.051) 0:02:14.800 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56
Monday 20 August 2018 06:31:34 -0400 (0:00:00.046) 0:02:14.847 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62
Monday 20 August 2018 06:31:34 -0400 (0:00:00.037) 0:02:14.884 *********
ok: [ceph-0] => {"ansible_facts": {"docker_env_args": "-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0"}, "changed": false}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70
Monday 20 August 2018 06:31:34 -0400 (0:00:00.081) 0:02:14.966 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78
Monday 20 August 2018 06:31:34 -0400 (0:00:00.048) 0:02:15.015 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86
Monday 20 August 2018 06:31:34 -0400 (0:00:00.044) 0:02:15.060 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2
Monday 20 August 2018 06:31:34 -0400 (0:00:00.047) 0:02:15.108 *********
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-08-20-06-12-42-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-08-20-06-12-42-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'7fbefd08-62e0-41fb-b85e-19cd4d681773', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'7fbefd08-62e0-41fb-b85e-19cd4d681773']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {"changed": false, "item": {"key": "vda", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"vda1": {"holders": [], "links": {"ids": [], "labels": ["config-2"], "masters": [], "uuids": ["2018-08-20-06-12-42-00"]}, "sectors": "2048", "sectorsize": 512, "size": "1.00 MB", "start": "2048", "uuid": "2018-08-20-06-12-42-00"}, "vda2": {"holders": [], "links": {"ids": [], "labels": ["img-rootfs"], "masters": [], "uuids": ["7fbefd08-62e0-41fb-b85e-19cd4d681773"]}, "sectors": "20967391", "sectorsize": 512, "size": "10.00 GB", "start": "4096", "uuid": "7fbefd08-62e0-41fb-b85e-19cd4d681773"}}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "20971520", "sectorsize": "512", "size": "10.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {"changed": false, "item": {"key": "vdc", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {"changed": false, "item": {"key": "vdb", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {"changed": false, "item": {"key": "vde", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {"changed": false, "item": {"key": "vdd", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}
skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {"changed": false, "item": {"key": "vdf", "value": {"holders": [], "host": "SCSI storage controller: Red Hat, Inc. Virtio block device", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "23068672", "sectorsize": "512", "size": "11.00 GB", "support_discard": "0", "vendor": "0x1af4", "virtual": 1}}, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : resolve dedicated device link(s)] *****************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15
Monday 20 August 2018 06:31:34 -0400 (0:00:00.097) 0:02:15.205 *********

TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24
Monday 20 August 2018 06:31:34 -0400 (0:00:00.045) 0:02:15.251 *********

TASK [ceph-osd : set_fact build final dedicated_devices list] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32
Monday 20 August 2018 06:31:34 -0400 (0:00:00.043) 0:02:15.294 *********
skipping: [ceph-0] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [ceph-osd : read information about the devices] ***************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29
Monday 20 August 2018 06:31:34 -0400 (0:00:00.043) 0:02:15.337 *********
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "disk": {"dev": "/dev/vdb", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdb", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "disk": {"dev": "/dev/vdc", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdc", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "disk": {"dev": "/dev/vdd", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdd", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "disk": {"dev": "/dev/vde", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vde", "partitions": [], "script": "unit 'MiB' print"}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "disk": {"dev": "/dev/vdf", "logical_block": 512, "model": "Virtio Block Device", "physical_block": 512, "size": 11264.0, "table": "unknown", "unit": "mib"}, "item": "/dev/vdf", "partitions": [], "script": "unit 'MiB' print"}

TASK [ceph-osd : check the partition status of the osd disks] ******************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2
Monday 20 August 2018 06:31:35 -0400 (0:00:00.995) 0:02:16.333 *********
ok: [ceph-0] => (item=/dev/vdb) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdb"], "delta": "0:00:00.007689", "end": "2018-08-20 10:31:35.856259", "failed_when_result": false, "item": "/dev/vdb", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:35.848570", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdc) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdc"], "delta": "0:00:00.007042", "end": "2018-08-20 10:31:36.012783", "failed_when_result": false, "item": "/dev/vdc", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:36.005741", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdd) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdd"], "delta": "0:00:00.006523", "end": "2018-08-20 10:31:36.167965", "failed_when_result": false, "item": "/dev/vdd", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:36.161442", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vde) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vde"], "delta": "0:00:00.007191", "end": "2018-08-20 10:31:36.321282", "failed_when_result": false, "item": "/dev/vde", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:36.314091", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=/dev/vdf) => {"changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdf"], "delta": "0:00:00.007161", "end": "2018-08-20 10:31:36.467284", "failed_when_result": false, "item": "/dev/vdf", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:36.460123", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

TASK [ceph-osd : create gpt disk label] ****************************************
task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11
Monday 20 August 2018 06:31:36 -0400 (0:00:00.832) 0:02:17.166 *********
ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-08-20 10:31:35.856259', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE="gpt" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007689', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:35.848570', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {"changed": false, "cmd": ["parted", "-s", "/dev/vdb", "mklabel", "gpt"], "delta": "0:00:00.011701", "end": "2018-08-20 10:31:36.716464", "item": [{"_ansible_ignore_errors": null, "_ansible_item_label": "/dev/vdb", "_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": ["blkid", "-t", "PTTYPE=gpt", "/dev/vdb"], "delta": "0:00:00.007689", "end": "2018-08-20 10:31:35.856259", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "blkid -t PTTYPE=\"gpt\" /dev/vdb", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": "/dev/vdb", "msg": "non-zero return code", "rc": 2, "start": "2018-08-20 10:31:35.848570", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/vdb"], "rc": 0, "start": "2018-08-20 10:31:36.704763", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-08-20 10:31:36.012783', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE="gpt" /dev/vdc', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': 
u'/dev/vdc', u'delta': u'0:00:00.007042', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.005741', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.009932\", \"end\": \"2018-08-20 10:31:36.895035\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007042\", \"end\": \"2018-08-20 10:31:36.012783\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.005741\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:36.885103\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-08-20 10:31:36.167965', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006523', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': 
u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.161442', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008962\", \"end\": \"2018-08-20 10:31:37.073108\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006523\", \"end\": \"2018-08-20 10:31:36.167965\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.161442\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.064146\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-08-20 10:31:36.321282', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.007191', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.314091', 
'_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.006835\", \"end\": \"2018-08-20 10:31:37.232430\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.007191\", \"end\": \"2018-08-20 10:31:36.321282\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.314091\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.225595\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\nok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-08-20 10:31:36.467284', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.007161', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.460123', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", 
\"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.006684\", \"end\": \"2018-08-20 10:31:37.388601\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007161\", \"end\": \"2018-08-20 10:31:36.467284\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.460123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.381917\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : include scenarios/collocated.yml] *****************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41\nMonday 20 August 2018 06:31:37 -0400 (0:00:00.933) 0:02:18.099 ********* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0\n\nTASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5\nMonday 20 August 2018 06:31:37 -0400 (0:00:00.096) 0:02:18.196 ********* \nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': 
u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.526070\", \"end\": \"2018-08-20 10:31:44.256951\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.730881\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: 
CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:37'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e4536f11-dd7a-409d-aa66-ee7ff961b6b2 /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e4536f11-dd7a-409d-aa66-ee7ff961b6b2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:05d3f79b-203c-4ff3-a357-964440c16877 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.Zzy7DS with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.Zzy7DS\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Zzy7DS/journal -> /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Zzy7DS\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:37'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e4536f11-dd7a-409d-aa66-ee7ff961b6b2 /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e4536f11-dd7a-409d-aa66-ee7ff961b6b2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:05d3f79b-203c-4ff3-a357-964440c16877 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.Zzy7DS with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Zzy7DS\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Zzy7DS/journal -> /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Zzy7DS\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:37 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:37 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:37 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:37 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:37 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:37 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:37 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:37 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' 
from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', 
u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.363569\", \"end\": \"2018-08-20 10:31:50.777600\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:44.414031\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:44'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 424551cb-046e-4505-a66a-438dfc9d8634 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:424551cb-046e-4505-a66a-438dfc9d8634 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:51d3baa7-0bd1-40d9-aba5-61e421e4e282 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.1LWkaV with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.1LWkaV\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.1LWkaV/journal -> /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1LWkaV\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:44'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 424551cb-046e-4505-a66a-438dfc9d8634 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:424551cb-046e-4505-a66a-438dfc9d8634 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:51d3baa7-0bd1-40d9-aba5-61e421e4e282 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.1LWkaV with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.1LWkaV\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.1LWkaV/journal -> /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.1LWkaV\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1LWkaV\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:44 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:44 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:44 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:44 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:44 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:44 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:44 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:44 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership 
of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 
'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.374018\", \"end\": \"2018-08-20 10:31:57.322381\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:50.948363\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source 
/config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir 
-p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:51'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 252b6b36-a52a-4a4f-820c-362379283e95 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:252b6b36-a52a-4a4f-820c-362379283e95 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:cafd64c3-82a1-4313-b1a3-a1926402114d --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.HvU_2j with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.HvU_2j\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HvU_2j/journal -> /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HvU_2j\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:51'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 252b6b36-a52a-4a4f-820c-362379283e95 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:252b6b36-a52a-4a4f-820c-362379283e95 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:cafd64c3-82a1-4313-b1a3-a1926402114d --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.HvU_2j with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.HvU_2j\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HvU_2j/journal -> /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.HvU_2j\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HvU_2j\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:51 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:51 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:51 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:51 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:51 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:51 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:51 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:51 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => 
(item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.558030\", \"end\": \"2018-08-20 10:32:04.057449\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, \"start\": 
\"2018-08-20 10:31:57.499419\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:57'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3962bf57-ff8b-4c96-ae23-ec662ba06977 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3962bf57-ff8b-4c96-ae23-ec662ba06977 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling 
partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:951e27f1-a8eb-4e7c-8d54-e78da591a6b7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.shxyGX with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.shxyGX\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.shxyGX/journal -> /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.shxyGX\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.shxyGX\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:57'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3962bf57-ff8b-4c96-ae23-ec662ba06977 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3962bf57-ff8b-4c96-ae23-ec662ba06977 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:951e27f1-a8eb-4e7c-8d54-e78da591a6b7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.shxyGX with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.shxyGX\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.shxyGX/journal -> /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.shxyGX\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.shxyGX\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", 
\"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}\nchanged: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.552197\", \"end\": \"2018-08-20 10:32:10.770316\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", 
\"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-08-20 10:32:04.218119\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:32:04'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid a0b92a62-97d2-44e1-9c14-6c834bffed36 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:a0b92a62-97d2-44e1-9c14-6c834bffed36 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9f041355-3e8c-4398-9922-f4b1641b83aa --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.ORoqlF with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.ORoqlF\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.ORoqlF/journal -> /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ORoqlF\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:32:04'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid a0b92a62-97d2-44e1-9c14-6c834bffed36 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:a0b92a62-97d2-44e1-9c14-6c834bffed36 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9f041355-3e8c-4398-9922-f4b1641b83aa --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.ORoqlF with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.ORoqlF\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.ORoqlF/journal -> /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.ORoqlF\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ORoqlF\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:32:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:32:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:32:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:32:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.BUGam3YbjO' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:32:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:32:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:32:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:32:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.BUGam3YbjO' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = 
sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}\n\nTASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30\nMonday 20 August 2018 06:32:10 -0400 (0:00:33.312) 0:02:51.508 ********* \nskipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53\nMonday 20 August 2018 06:32:10 -0400 (0:00:00.071) 0:02:51.580 ********* \nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': 
{u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, 
\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", 
\"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': 
False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/non-collocated.yml] *************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.106) 0:02:51.687 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include scenarios/lvm.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.049) 0:02:51.736 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}\n\nTASK [ceph-osd : include activate_osds.yml] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.045) 0:02:51.781 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include start_osds.yml] ***************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.047) 0:02:51.828 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : include docker/main.yml] **************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.045) 0:02:51.874 ********* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0\n\nTASK [ceph-osd : include start_docker_osd.yml] *********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.091) 0:02:51.965 ********* \nincluded: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0\n\nTASK [ceph-osd : umount ceph disk (if on openstack)] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.068) 0:02:52.034 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : test if the container image has the disk_list function] *******\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.051) 0:02:52.085 ********* \nok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", 
\"192.168.24.1:8787/rhceph:3-11\", \"disk_list.sh\"], \"delta\": \"0:00:00.300563\", \"end\": \"2018-08-20 10:32:11.907338\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:32:11.606775\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 5353940 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-07-06 17:29:14.000000000 +0000\\nModify: 2018-07-06 17:29:14.000000000 +0000\\nChange: 2018-08-20 10:31:15.775934684 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 5353940 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-07-06 17:29:14.000000000 +0000\", \"Modify: 2018-07-06 17:29:14.000000000 +0000\", \"Change: 2018-08-20 10:31:15.775934684 +0000\", \" Birth: -\"]}\n\nTASK [ceph-osd : generate ceph osd docker run script] **************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19\nMonday 20 August 2018 06:32:11 -0400 (0:00:00.521) 0:02:52.607 ********* \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": \"100bffd271ecfac88d5dd501d37dfca7b05f2102\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8f90e441a65774a9867e35ad6cde7f59\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761131.99-269983588461395/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : generate systemd unit file] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30\nMonday 20 August 2018 06:32:12 -0400 (0:00:00.761) 0:02:53.368 ********* \nchanged: [ceph-0] => {\"changed\": true, \"checksum\": 
\"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761132.88-63545072408675/source\", \"state\": \"file\", \"uid\": 0}\n\nTASK [ceph-osd : systemd start osd container] **********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41\nMonday 20 August 2018 06:32:13 -0400 (0:00:00.837) 0:02:54.206 ********* \nchanged: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket system-ceph\\\\x5cx2dosd.slice docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": 
\"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", 
\"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": 
\"inactive\", \"After\": \"basic.target docker.service system-ceph\\\\x5cx2dosd.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", 
\"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", 
\"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service basic.target system-ceph\\\\x5cx2dosd.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": 
\"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": 
\"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": 
\"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket system-ceph\\\\x5cx2dosd.slice basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": 
\"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", 
\"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}\nchanged: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": 
\"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": 
\"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", 
\"WatchdogUSec\": \"0\"}}\n\nTASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87\nMonday 20 August 2018 06:32:16 -0400 (0:00:02.945) 0:02:57.152 ********* \nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}\nskipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', u'key': u'AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==', u'name': u'client.radosgw'}) => {\"changed\": false, 
\"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95\nMonday 20 August 2018 06:32:16 -0400 (0:00:00.079) 0:02:57.231 ********* \nskipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}\n\nTASK [ceph-osd : wait for all osd to be up] ************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2\nMonday 20 August 2018 06:32:16 -0400 (0:00:00.077) 0:02:57.309 ********* \nchanged: [ceph-0 -> 192.168.24.12] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.825873\", \"end\": \"2018-08-20 10:32:17.924597\", \"rc\": 0, \"start\": \"2018-08-20 10:32:17.098724\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : list existing pool(s)] ****************************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12\nMonday 20 August 2018 06:32:18 -0400 (0:00:01.389) 0:02:58.698 ********* \nchanged: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", 
\"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.393910\", \"end\": \"2018-08-20 10:32:18.699797\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:18.305887\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.12] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.372823\", \"end\": \"2018-08-20 10:32:19.287691\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:18.914868\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.342221\", \"end\": \"2018-08-20 10:32:19.833503\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:19.491282\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", 
\"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.352919\", \"end\": \"2018-08-20 10:32:20.395552\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:20.042633\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}\nchanged: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.323752\", \"end\": \"2018-08-20 10:32:20.905134\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:20.581382\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}\n\nTASK [ceph-osd : set_fact rule_name before luminous] ***************************\ntask path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21\nMonday 20 August 2018 06:32:20 -0400 (0:00:02.915) 0:03:01.613 ********* \nfatal: [ceph-0]: FAILED! 
=> {\"msg\": \"The conditional check 'ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']): 'dict object' has no attribute u'dummy'\\n\\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml': line 21, column 3, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n\\n- name: set_fact rule_name before luminous\\n ^ here\\n\"}\n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.032) 0:03:01.646 ********* \n\nRUNNING HANDLER [ceph-defaults : copy mon restart script] **********************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.646 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.647 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.647 ********* \n\nRUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* \n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* \n\nRUNNING HANDLER [ceph-defaults : copy osd restart script] **********************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.649 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 
0:03:01.649 ********* \n\nRUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.649 ********* \n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.650 ********* \n\nRUNNING HANDLER [ceph-defaults : copy mds restart script] **********************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.650 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* \n\nRUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* \n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* \n\nRUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* \n\nRUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* \n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* \n\nRUNNING HANDLER [ceph-defaults : copy rbd mirror restart 
script] ***************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* \n\nRUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.655 ********* \n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* \n\nRUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***\nMonday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* \n\nRUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******\nMonday 20 August 2018 06:32:21 -0400 (0:00:00.000) 0:03:01.657 ********* \n\nRUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********\nMonday 20 August 2018 06:32:21 -0400 (0:00:00.000) 0:03:01.657 ********* \n\nPLAY RECAP *********************************************************************\nceph-0 : ok=68 changed=15 unreachable=0 failed=1 \ncompute-0 : ok=2 changed=0 unreachable=0 failed=0 \ncontroller-0 : ok=121 changed=22 unreachable=0 failed=0 \n\n\nINSTALLER STATUS ***************************************************************\nInstall Ceph Monitor : Complete (0:01:01)\nInstall Ceph Manager : Complete (0:00:24)\nInstall Ceph OSD : In Progress (0:01:24)\n\tThis phase can be restarted by running: roles/ceph-osd/tasks/main.yml\n\nMonday 20 August 2018 06:32:21 -0400 (0:00:00.004) 0:03:01.662 ********* 
\n=============================================================================== ", "stdout_lines": ["ansible-playbook 2.5.7", " config file = /usr/share/ceph-ansible/ansible.cfg", " configured module search path = [u'/usr/share/ceph-ansible/library']", " ansible python module location = /usr/lib/python2.7/site-packages/ansible", " executable location = /usr/bin/ansible-playbook", " python version = 2.7.5 (default, May 31 2018, 09:41:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]", "Using /usr/share/ceph-ansible/ansible.cfg as config file", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/non_containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-mds/tasks/containerized.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rgw/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically 
imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/pre_requisite_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/ganesha_selinux_fix.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-nfs/tasks/start_nfs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/common.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/start_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/configure_mirroring.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-rbd-mirror/tasks/docker/start_docker_rbd_mirror.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/start_restapi.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/main.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/copy_configs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-restapi/tasks/docker/start_docker_restapi.yml", 
"statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_cluster.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/debian_prerequisites.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml", "statically imported: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/pre_requisite.yml", "statically imported: /usr/share/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml", "", "PLAYBOOK: site-docker.yml.sample ***********************************************", "12 plays in /usr/share/ceph-ansible/site-docker.yml.sample", "", "PLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,iscsigws,mgrs] ***", "", "TASK [gather facts] ************************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:24", "Monday 20 August 2018 06:29:19 -0400 (0:00:00.200) 0:00:00.200 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [gather and delegate facts] ***********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:29", "Monday 20 August 2018 06:29:19 -0400 (0:00:00.083) 0:00:00.283 ********* ", "ok: [controller-0 -> 192.168.24.13] => (item=compute-0)", "ok: [controller-0 -> 192.168.24.12] => (item=controller-0)", "ok: [controller-0 -> 192.168.24.16] => (item=ceph-0)", "", "TASK [check if it is atomic host] **********************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:38", "Monday 20 August 2018 06:29:31 -0400 (0:00:12.110) 0:00:12.394 ********* ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "ok: [compute-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK 
[set_fact is_atomic] ******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:45", "Monday 20 August 2018 06:29:32 -0400 (0:00:00.521) 0:00:12.915 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "ok: [compute-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "TASK [pull rhceph image] *******************************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:66", "Monday 20 August 2018 06:29:32 -0400 (0:00:00.175) 0:00:13.090 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "skipping: [compute-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:76", "Monday 20 August 2018 06:29:32 -0400 (0:00:00.115) 0:00:13.205 ********* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"start\": \"20180820062932Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 20 August 2018 06:29:32 -0400 
(0:00:00.169) 0:00:13.375 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.028304\", \"end\": \"2018-08-20 10:29:33.155728\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:33.127424\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.560) 0:00:13.935 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.049) 0:00:13.984 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.047) 0:00:14.032 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.077 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.023435\", \"end\": \"2018-08-20 10:29:33.631166\", \"failed_when_result\": false, \"rc\": 0, 
\"start\": \"2018-08-20 10:29:33.607731\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.257) 0:00:14.334 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.048) 0:00:14.383 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.047) 0:00:14.430 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.475 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.045) 0:00:14.520 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph 
osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.046) 0:00:14.566 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 20 August 2018 06:29:33 -0400 (0:00:00.046) 0:00:14.612 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.067) 0:00:14.680 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.047) 0:00:14.728 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.045) 0:00:14.774 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 20 August 
2018 06:29:34 -0400 (0:00:00.050) 0:00:14.824 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.043) 0:00:14.867 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.057) 0:00:14.925 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.058) 0:00:14.983 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.053) 0:00:15.037 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.060) 0:00:15.098 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", 
"", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.057) 0:00:15.155 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.053) 0:00:15.208 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.070) 0:00:15.279 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.058) 0:00:15.338 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.050) 0:00:15.388 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.047) 0:00:15.435 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.046) 0:00:15.481 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 20 August 2018 06:29:34 -0400 (0:00:00.046) 0:00:15.528 ********* ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.211) 0:00:15.740 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.073) 0:00:15.813 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.079) 0:00:15.893 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.071) 0:00:15.965 ********* ", "ok: [controller-0 -> 192.168.24.12] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.136) 0:00:16.101 ********* ", "ok: [controller-0 -> 192.168.24.12] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.026050\", \"end\": \"2018-08-20 10:29:35.659042\", \"failed_when_result\": false, \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-08-20 10:29:35.632992\", \"stderr\": \"Error response from daemon: No such container: ceph-mon-controller-0\", \"stderr_lines\": [\"Error response from daemon: No such container: ceph-mon-controller-0\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.265) 0:00:16.367 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.198) 0:00:16.566 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] 
*****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 20 August 2018 06:29:35 -0400 (0:00:00.058) 0:00:16.625 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.387) 0:00:17.012 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.052) 0:00:17.065 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.076) 0:00:17.142 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.048) 0:00:17.190 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 20 August 2018 06:29:36 
-0400 (0:00:00.052) 0:00:17.242 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.044) 0:00:17.287 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.047) 0:00:17.335 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.079) 0:00:17.414 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.043) 0:00:17.457 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.081) 0:00:17.539 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 20 August 2018 06:29:36 -0400 (0:00:00.075) 0:00:17.615 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.078) 0:00:17.693 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.050) 0:00:17.744 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.049) 0:00:17.793 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.045) 0:00:17.839 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.045) 0:00:17.885 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.044) 0:00:17.929 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.046) 0:00:17.976 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.047) 0:00:18.024 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : get current cluster status (if already running)] *********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.166) 0:00:18.190 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.122) 0:00:18.312 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.044) 0:00:18.357 ********* ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.052) 0:00:18.410 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"rgw_hostname\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.070) 0:00:18.480 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 20 August 2018 06:29:37 -0400 (0:00:00.069) 0:00:18.550 ********* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mon) => 
{\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => 
(item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [controller-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 20 August 2018 06:29:39 -0400 (0:00:02.006) 0:00:20.556 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 20 August 2018 06:29:39 -0400 (0:00:00.047) 0:00:20.604 
********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.056) 0:00:20.660 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.044) 0:00:20.705 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.045) 0:00:20.750 ********* ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.382) 0:00:21.133 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", 
"", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.088) 0:00:21.221 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.041) 0:00:21.263 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.020137\", \"end\": \"2018-08-20 10:29:40.795882\", \"rc\": 0, \"start\": \"2018-08-20 10:29:40.775745\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.233) 0:00:21.497 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 20 August 2018 06:29:40 -0400 (0:00:00.075) 0:00:21.572 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.022275\", \"end\": \"2018-08-20 10:29:41.105471\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:41.083196\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact 
ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 20 August 2018 06:29:41 -0400 (0:00:00.233) 0:00:21.806 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 20 August 2018 06:29:41 -0400 (0:00:00.098) 0:00:21.904 ********* ", "ok: [controller-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 20 August 2018 06:29:41 -0400 (0:00:00.133) 0:00:22.037 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 20 August 2018 06:29:41 -0400 (0:00:00.084) 0:00:22.122 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 20 August 2018 06:29:41 -0400 (0:00:00.108) 0:00:22.231 ********* ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => 
(item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 20 August 2018 06:29:42 -0400 (0:00:01.170) 0:00:23.401 ********* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, 
\"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, 
u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, 
u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", 
{\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, 
\"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, 
\"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"exists\": 
false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.269) 0:00:23.671 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.042) 0:00:23.713 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.040) 0:00:23.754 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:23.801 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.055) 0:00:23.857 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.048) 0:00:23.905 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:23.951 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.052) 0:00:24.004 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.058) 0:00:24.063 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.058) 0:00:24.121 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.047) 0:00:24.169 ********* ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.043) 0:00:24.213 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.257 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.051) 0:00:24.308 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.056) 0:00:24.365 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.046) 0:00:24.412 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.048) 0:00:24.461 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.505 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.044) 0:00:24.549 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 20 August 2018 06:29:43 -0400 (0:00:00.054) 0:00:24.603 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.059) 0:00:24.663 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.047) 0:00:24.711 ********* ", "skipping: [controller-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.047) 0:00:24.758 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.050) 0:00:24.809 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.048) 0:00:24.857 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.058) 0:00:24.916 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.049) 0:00:24.965 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.049) 0:00:25.015 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.053) 0:00:25.068 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 20 August 2018 06:29:44 -0400 (0:00:00.052) 0:00:25.121 ********* ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:13.452881\", \"end\": \"2018-08-20 10:29:58.205484\", \"rc\": 0, \"start\": \"2018-08-20 10:29:44.752603\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nd02c3bd49e78: Pulling fs layer\\n475b0168c252: Pulling fs layer\\n9cc28bc5e4f9: Pulling fs layer\\n475b0168c252: Download complete\\nd02c3bd49e78: Download complete\\n9cc28bc5e4f9: Verifying Checksum\\n9cc28bc5e4f9: Download complete\\nd02c3bd49e78: Pull complete\\n475b0168c252: Pull complete\\n9cc28bc5e4f9: Pull complete\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"d02c3bd49e78: Pulling fs layer\", \"475b0168c252: Pulling fs layer\", \"9cc28bc5e4f9: Pulling fs layer\", \"475b0168c252: Download complete\", \"d02c3bd49e78: Download complete\", \"9cc28bc5e4f9: Verifying Checksum\", \"9cc28bc5e4f9: Download complete\", \"d02c3bd49e78: Pull complete\", \"475b0168c252: Pull complete\", \"9cc28bc5e4f9: Pull complete\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 20 August 2018 06:29:58 -0400 (0:00:13.795) 0:00:38.917 ********* ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.024836\", \"end\": \"2018-08-20 10:29:58.583464\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:29:58.558628\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": 
{},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": 
\\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": 
null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": 
\\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": 
\\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" 
\\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 20 August 2018 06:29:58 -0400 (0:00:00.381) 0:00:39.299 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 20 August 2018 06:29:58 -0400 (0:00:00.209) 0:00:39.509 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 20 August 2018 06:29:58 -0400 (0:00:00.123) 0:00:39.633 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.050) 0:00:39.684 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.048) 0:00:39.732 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.049) 0:00:39.782 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.050) 0:00:39.833 ********* ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.044) 0:00:39.878 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.051) 0:00:39.930 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.043) 0:00:39.974 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.043) 0:00:40.017 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.045) 0:00:40.062 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 20 August 2018 06:29:59 -0400 (0:00:00.052) 0:00:40.114 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-11\", \"--version\"], \"delta\": \"0:00:00.465469\", \"end\": \"2018-08-20 10:30:00.105336\", \"rc\": 0, \"start\": \"2018-08-20 10:29:59.639867\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.699) 0:00:40.814 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-30.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.076) 0:00:40.890 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.050) 0:00:40.940 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.047) 0:00:40.988 
********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.083) 0:00:41.072 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.050) 0:00:41.122 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 20 August 2018 06:30:00 -0400 (0:00:00.047) 0:00:41.170 ********* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", 
\"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.879) 0:00:42.050 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.055) 0:00:42.105 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.054) 0:00:42.160 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 6, \"state\": 
\"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.252) 0:00:42.413 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.053) 0:00:42.467 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 20 August 2018 06:30:01 -0400 (0:00:00.048) 0:00:42.515 ********* ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 20 August 2018 06:30:02 -0400 (0:00:00.234) 0:00:42.749 ********* ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for controller-0", "NOTIFIED 
HANDLER ceph-defaults : copy osd restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for controller-0", "NOTIFIED HANDLER 
ceph-defaults : restart ceph rbd mirror daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"ad274129acdf99bf79681112519249b5cd433cfc\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"d12c4a40219f2d53aebea240077fc57d\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761002.14-187001174316155/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 20 August 2018 06:30:04 -0400 (0:00:02.373) 0:00:45.122 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:2", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.064) 0:00:45.187 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mon : make sure monitor_interface or monitor_address or monitor_address_block is configured] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:2", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.200) 0:00:45.388 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate monitor initial keyring] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:2", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.066) 0:00:45.454 ********* ", "skipping: [controller-0] => 
{\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : read monitor initial keyring if it already exists] ************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:11", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.066) 0:00:45.521 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create monitor initial keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.051) 0:00:45.572 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set initial monitor key permissions] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:34", "Monday 20 August 2018 06:30:04 -0400 (0:00:00.051) 0:00:45.624 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create (and fix ownership of) monitor directory] **************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:42", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:45.675 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap >= ceph_release_num.luminous] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:51", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.061) 0:00:45.737 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact client_admin_ceph_authtool_cap < ceph_release_num.luminous] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:63", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.066) 0:00:45.803 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create custom admin keyring] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:74", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.261) 0:00:46.065 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set ownership of admin keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:88", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.060) 0:00:46.125 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : import admin keyring into mon keyring] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:99", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:46.176 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs with keyring] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:106", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.050) 0:00:46.227 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ceph monitor mkfs without keyring] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:113", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.044) 0:00:46.272 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was 
False\"}", "", "TASK [ceph-mon : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:2", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.052) 0:00:46.325 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add ceph-mon systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:10", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.049) 0:00:46.375 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : start the monitor service] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:20", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.056) 0:00:46.431 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : enable the ceph-mon.target service] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/start_monitor.yml:29", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.061) 0:00:46.493 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : include ceph_keys.yml] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/main.yml:19", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.063) 0:00:46.556 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : collect all the pools] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:2", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.049) 0:00:46.605 ********* ", "skipping: 
[controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : secure the cluster] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/secure_cluster.yml:7", "Monday 20 August 2018 06:30:05 -0400 (0:00:00.048) 0:00:46.654 ********* ", "", "TASK [ceph-mon : set_fact ceph_config_keys] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:2", "Monday 20 August 2018 06:30:06 -0400 (0:00:00.064) 0:00:46.718 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : register rbd bootstrap key] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:11", "Monday 20 August 2018 06:30:06 -0400 (0:00:00.086) 0:00:46.805 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"bootstrap_rbd_keyring\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : merge rbd bootstrap key to config and keys paths] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:17", "Monday 20 August 2018 06:30:06 -0400 (0:00:00.083) 0:00:46.888 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-mon : stat for ceph config and keys] ********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:22", "Monday 20 August 2018 06:30:06 -0400 (0:00:00.091) 0:00:46.980 ********* ", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}", "ok: [controller-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}", "", "TASK [ceph-mon : try to copy ceph keys] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/copy_configs.yml:33", "Monday 20 August 2018 06:30:07 -0400 (0:00:00.983) 0:00:47.963 ********* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 
'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": 
false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": 
\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": 
\"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, 
\"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with default ceph.conf] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:2", "Monday 20 August 2018 06:30:07 -0400 (0:00:00.146) 0:00:48.110 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : populate kv_store with custom ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:18", "Monday 20 August 2018 06:30:07 -0400 (0:00:00.052) 0:00:48.163 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : delete populate-kv-store docker] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:36", "Monday 20 August 2018 06:30:07 -0400 (0:00:00.048) 0:00:48.211 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:43", "Monday 20 August 2018 06:30:07 -0400 (0:00:00.047) 0:00:48.259 ********* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"1fd7e13e28ace96222549265cb506432639d6b8b\", \"dest\": \"/etc/systemd/system/ceph-mon@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"2d20afce9a3de8ef54fb3f294f9f63d7\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 887, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761007.64-194225143353676/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : systemd 
start mon container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/start_docker_monitor.yml:54", "Monday 20 August 2018 06:30:08 -0400 (0:00:00.865) 0:00:49.125 ********* ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mon@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket basic.target system-ceph\\\\x5cx2dmon.slice docker.service\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Monitor\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --name ceph-mon-%i --memory=3g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro --net=host -e IP_VERSION=4 -e MON_IP=172.17.3.14 -e CLUSTER=ceph -e 
FSID=00d03b50-a460-11e8-8cf1-525400721501 -e CEPH_PUBLIC_NETWORK=172.17.3.0/24 -e CEPH_DAEMON=MON 192.168.24.1:8787/rhceph:3-11 ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mon-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/bin/rm ; argv[]=/bin/rm -f /var/run/ceph/ceph-mon.controller-0.asok ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mon@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mon@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127799\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127799\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": 
\"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mon@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmon.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmon.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mon : configure ceph profile.d aliases] *****************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/configure_ceph_command_aliases.yml:2", "Monday 20 August 2018 06:30:09 -0400 (0:00:00.716) 0:00:49.841 ********* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"78965c7dfcde4827c1cb8645bc7a444472e87718\", \"dest\": \"/etc/profile.d/ceph-aliases.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"66a9bfe5c26a22ade3c67cc7c7a58d2c\", \"mode\": \"0755\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:bin_t:s0\", \"size\": 375, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761009.23-195696500984284/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mon : wait for monitor socket to exist] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:12", "Monday 20 August 2018 06:30:09 -0400 (0:00:00.536) 0:00:50.378 ********* ", "FAILED - RETRYING: wait for monitor socket to exist (5 retries left).", "changed: [controller-0] => {\"attempts\": 2, \"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"sh\", \"-c\", \"stat /var/run/ceph/ceph-mon.controller-0.asok || stat /var/run/ceph/ceph-mon.controller-0.localdomain.asok\"], \"delta\": \"0:00:00.078752\", \"end\": \"2018-08-20 10:30:25.236221\", \"rc\": 0, \"start\": \"2018-08-20 10:30:25.157469\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: '/var/run/ceph/ceph-mon.controller-0.asok'\\n Size: 0 \\tBlocks: 0 IO Block: 4096 socket\\nDevice: 14h/20d\\tInode: 326382 Links: 1\\nAccess: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\\nAccess: 2018-08-20 10:30:10.124988857 +0000\\nModify: 2018-08-20 10:30:10.124988857 +0000\\nChange: 2018-08-20 10:30:10.124988857 +0000\\n Birth: -\", \"stdout_lines\": [\" File: '/var/run/ceph/ceph-mon.controller-0.asok'\", \" Size: 0 \\tBlocks: 0 IO Block: 4096 socket\", \"Device: 14h/20d\\tInode: 326382 Links: 1\", \"Access: (0755/srwxr-xr-x) Uid: ( 167/ ceph) Gid: ( 167/ ceph)\", \"Access: 2018-08-20 
10:30:10.124988857 +0000\", \"Modify: 2018-08-20 10:30:10.124988857 +0000\", \"Change: 2018-08-20 10:30:10.124988857 +0000\", \" Birth: -\"]}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:19", "Monday 20 August 2018 06:30:25 -0400 (0:00:15.562) 0:01:05.940 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:29", "Monday 20 August 2018 06:30:25 -0400 (0:00:00.097) 0:01:06.038 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv4 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:39", "Monday 20 August 2018 06:30:25 -0400 (0:00:00.088) 0:01:06.126 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--admin-daemon\", \"/var/run/ceph/ceph-mon.controller-0.asok\", \"add_bootstrap_peer_hint\", \"172.17.3.14\"], \"delta\": \"0:00:00.175474\", \"end\": \"2018-08-20 10:30:25.913019\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:25.737545\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"mon already active; ignoring bootstrap hint\", \"stdout_lines\": [\"mon already active; ignoring bootstrap hint\"]}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_interface] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:49", "Monday 20 August 2018 06:30:25 -0400 (0:00:00.489) 
0:01:06.615 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:59", "Monday 20 August 2018 06:30:26 -0400 (0:00:00.053) 0:01:06.669 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : ipv6 - force peer addition as potential bootstrap peer for cluster bringup - monitor_address_block] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:69", "Monday 20 August 2018 06:30:26 -0400 (0:00:00.054) 0:01:06.723 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : push ceph files to the ansible server] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/fetch_configs.yml:2", "Monday 20 August 2018 06:30:26 -0400 (0:00:00.051) 0:01:06.775 ********* ", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": true, \"checksum\": 
\"32793d89de7819833a3849e42af57849c578f1ee\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.client.admin.keyring\", \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"f4b124585db38fc16abb99f1a1324648\", \"remote_checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": true, \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", 
\"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.mon.keyring\", \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"7ef3261179ff3f34a66f8517502d80f2\", \"remote_checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"dest\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"e32a66ddc038f6331ba8cd3a3e75084e\", \"remote_checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": true, \"checksum\": 
\"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"f361c1725afb0640dd7f85ed53589f84\", \"remote_checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => 
{\"changed\": true, \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"59f9cbc63af613abd9891519100e0820\", \"remote_checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"remote_md5sum\": null}", "changed: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": true, \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"exists\": false}}], \"md5sum\": \"b92b9cf2d12e29b7c950a7ba01356a77\", \"remote_checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : create ceph rest api keyring when mon is containerized] *******", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:84", "Monday 20 August 2018 06:30:27 -0400 (0:00:01.313) 0:01:08.088 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create ceph mgr keyring(s) when mon is containerized] *********", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:97", "Monday 20 August 2018 06:30:27 -0400 (0:00:00.057) 0:01:08.146 ********* ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", 
\"ceph\", \"auth\", \"get-or-create\", \"mgr.controller-0\", \"mon\", \"allow profile mgr\", \"osd\", \"allow *\", \"mds\", \"allow *\", \"-o\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"], \"delta\": \"0:00:00.328956\", \"end\": \"2018-08-20 10:30:28.223470\", \"item\": \"controller-0\", \"rc\": 0, \"start\": \"2018-08-20 10:30:27.894514\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-mon : stat for ceph mgr key(s)] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:109", "Monday 20 August 2018 06:30:28 -0400 (0:00:00.782) 0:01:08.928 ********* ", "ok: [controller-0] => (item=controller-0) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"controller-0\", \"stat\": {\"atime\": 1534761028.0999975, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761028.2069974, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 77909386, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1534761028.2069974, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"2012420224\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-mon : fetch ceph mgr key(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/docker/main.yml:121", "Monday 20 August 2018 06:30:28 -0400 (0:00:00.398) 0:01:09.327 ********* ", 
"changed: [controller-0] => (item={'_ansible_parsed': True, u'stat': {u'charset': u'us-ascii', u'uid': 0, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761028.2069974, u'block_size': 4096, u'inode': 77909386, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': u'2012420224', u'pw_name': u'root', u'gid': 0, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'root', u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1534761028.0999975, u'mimetype': u'text/plain', u'ctime': 1534761028.2069974, u'isblk': False, u'checksum': u'557a22485a6e0bcdb875a5f5926bdb3409555b7d', u'dev': 64514, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, 'failed': False, u'changed': False, 'item': u'controller-0', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'controller-0'}) => {\"changed\": true, \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"dest\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501/etc/ceph/ceph.mgr.controller-0.keyring\", \"item\": {\"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"controller-0\", \"stat\": {\"atime\": 
1534761028.0999975, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"us-ascii\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761028.2069974, \"dev\": 64514, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 0, \"gr_name\": \"root\", \"inode\": 77909386, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"text/plain\", \"mode\": \"0644\", \"mtime\": 1534761028.2069974, \"nlink\": 1, \"path\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"root\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 0, \"version\": \"2012420224\", \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}, \"md5sum\": \"82352f6d3d5aac744c3838aa345e1f7c\", \"remote_checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"remote_md5sum\": null}", "", "TASK [ceph-mon : configure crush hierarchy] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:2", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.417) 0:01:09.744 ********* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : create configured crush rules] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:14", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.065) 0:01:09.809 ********* ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: 
[controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get id for new default crush rule] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:21", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.074) 0:01:09.883 ********* ", "skipping: [controller-0] => (item={u'default': False, u'root': u'HDD', u'type': u'host', u'name': u'HDD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item={u'default': False, u'root': u'SSD', u'type': u'host', u'name': u'SSD'}) => {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact info_ceph_default_crush_rule_yaml] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:33", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.077) 0:01:09.961 ********* ", "skipping: [controller-0] => (item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'HDD', u'name': u'HDD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"HDD\", \"root\": \"HDD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => 
(item={'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}, 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': {u'default': False, u'type': u'host', u'root': u'SSD', u'name': u'SSD'}}) => {\"changed\": false, \"item\": {\"changed\": false, \"item\": {\"default\": false, \"name\": \"SSD\", \"root\": \"SSD\", \"type\": \"host\"}, \"skip_reason\": \"Conditional result was False\", \"skipped\": true}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_crush_rule to osd_pool_default_crush_replicated_ruleset if release < luminous else osd_pool_default_crush_rule] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:41", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.073) 0:01:10.034 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : insert new default crush rule into daemon to prevent restart] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:45", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.160) 0:01:10.195 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : add new default crush rule to ceph.conf] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/crush_rules.yml:54", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.080) 0:01:10.275 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : get default value for osd_pool_default_pg_num] ****************", "task path: 
/usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:5", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.054) 0:01:10.329 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with pool_default_pg_num (backward compatibility)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:16", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.049) 0:01:10.378 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num with default_pool_default_pg_num.stdout] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:21", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.051) 0:01:10.430 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : set_fact osd_pool_default_pg_num ceph_conf_overrides.global.osd_pool_default_pg_num] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/set_osd_pool_default_pg_num.yml:27", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.045) 0:01:10.475 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"osd_pool_default_pg_num\": \"32\"}, \"changed\": false}", "", "TASK [ceph-mon : test if calamari-server is installed] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:2", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.076) 0:01:10.552 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : increase calamari logging level when debug is on] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:18", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.044) 0:01:10.597 ********* ", 
"skipping: [controller-0] => (item=cthulhu) => {\"changed\": false, \"item\": \"cthulhu\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=calamari_web) => {\"changed\": false, \"item\": \"calamari_web\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mon : initialize the calamari server api] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/calamari.yml:29", "Monday 20 August 2018 06:30:29 -0400 (0:00:00.053) 0:01:10.651 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.017) 0:01:10.668 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.072) 0:01:10.741 ********* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"83f7af8323e264039a95f266faedb4a665c8f4ca\", \"dest\": \"/tmp/restart_mon_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"a72fe8d7f7ff92960aa2e96a1b3fe152\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 1398, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761030.16-39284618963640/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.519) 0:01:11.260 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.094) 
0:01:11.355 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.143) 0:01:11.499 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mon_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.071) 0:01:11.570 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Monday 20 August 2018 06:30:30 -0400 (0:00:00.069) 0:01:11.640 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.051) 0:01:11.691 ********* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.089) 0:01:11.780 ********* ", "skipping: [controller-0] => (item=ceph-0) => {\"changed\": false, \"item\": \"ceph-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.081) 0:01:11.862 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_osd_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.072) 
0:01:11.934 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.071) 0:01:12.005 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.043) 0:01:12.048 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.050) 0:01:12.099 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.055) 0:01:12.154 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mds_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.071) 0:01:12.226 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.065) 0:01:12.292 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.044) 0:01:12.337 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", 
"RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.052) 0:01:12.389 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.053) 0:01:12.443 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rgw_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.072) 0:01:12.516 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.075) 0:01:12.591 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Monday 20 August 2018 06:30:31 -0400 (0:00:00.049) 0:01:12.640 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.059) 0:01:12.700 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.056) 0:01:12.757 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_rbdmirror_handler_called\": false}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 20 August 2018 
06:30:32 -0400 (0:00:00.076) 0:01:12.833 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.076) 0:01:12.909 ********* ", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"73c8d33ad2b3c95d77ee4b411e06cae6\", \"mode\": \"0750\", \"owner\": \"root\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761032.33-165568647489462/source\", \"state\": \"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.499) 0:01:13.409 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.086) 0:01:13.495 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 20 August 2018 06:30:32 -0400 (0:00:00.127) 0:01:13.623 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mons] ********************************************************************", "META: ran handlers", "", "TASK [set ceph monitor install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:98", "Monday 
20 August 2018 06:30:33 -0400 (0:00:00.108) 0:01:13.731 ********* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mon\": {\"end\": \"20180820063033Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "META: ran handlers", "", "PLAY [mgrs] ********************************************************************", "", "TASK [set ceph manager install 'In Progress'] **********************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:110", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.154) 0:01:13.886 ********* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"start\": \"20180820063033Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.095) 0:01:13.982 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.026595\", \"end\": \"2018-08-20 10:30:33.537723\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:33.511128\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"e09152a9bfe0\", \"stdout_lines\": [\"e09152a9bfe0\"]}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.259) 0:01:14.242 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.049) 0:01:14.291 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.051) 0:01:14.342 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 20 August 2018 06:30:33 -0400 (0:00:00.049) 0:01:14.392 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mgr-controller-0\"], \"delta\": \"0:00:00.023186\", \"end\": \"2018-08-20 10:30:34.048709\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:34.025523\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.361) 0:01:14.753 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.048) 0:01:14.802 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK 
[ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.047) 0:01:14.850 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.045) 0:01:14.895 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.051) 0:01:14.946 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.192) 0:01:15.139 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.055) 0:01:15.195 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.050) 0:01:15.246 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.051) 0:01:15.297 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.343 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.390 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.045) 0:01:15.436 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.049) 0:01:15.486 ********* ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.533 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.579 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 20 August 2018 06:30:34 -0400 (0:00:00.046) 0:01:15.625 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.060) 0:01:15.686 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.051) 0:01:15.737 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd 
mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.048) 0:01:15.785 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.048) 0:01:15.833 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.045) 0:01:15.879 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.049) 0:01:15.928 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.050) 0:01:15.978 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 20 August 2018 
06:30:35 -0400 (0:00:00.048) 0:01:16.026 ********* ", "ok: [controller-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.224) 0:01:16.251 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.085) 0:01:16.337 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.091) 0:01:16.429 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 20 August 2018 06:30:35 -0400 (0:00:00.077) 0:01:16.506 ********* ", "ok: [controller-0 -> 192.168.24.12] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] 
********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 20 August 2018 06:30:36 -0400 (0:00:00.161) 0:01:16.668 ********* ", "ok: [controller-0 -> 192.168.24.12] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.366133\", \"end\": \"2018-08-20 10:30:36.583141\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:36.217008\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"00d03b50-a460-11e8-8cf1-525400721501\", \"stdout_lines\": [\"00d03b50-a460-11e8-8cf1-525400721501\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 20 August 2018 06:30:36 -0400 (0:00:00.626) 0:01:17.295 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 20 August 2018 06:30:36 -0400 (0:00:00.187) 0:01:17.483 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 20 August 2018 06:30:36 -0400 (0:00:00.050) 0:01:17.533 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 50, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : set_fact fsid ceph_current_fsid.stdout] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 20 August 2018 06:30:37 -0400 (0:00:00.177) 0:01:17.711 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"fsid\": \"00d03b50-a460-11e8-8cf1-525400721501\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 20 August 2018 06:30:37 -0400 (0:00:00.080) 0:01:17.791 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 20 August 2018 06:30:37 -0400 (0:00:00.079) 0:01:17.871 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 20 August 2018 06:30:37 -0400 (0:00:00.045) 0:01:17.917 ********* ", "changed: [controller-0 -> localhost] => {\"changed\": true, \"cmd\": \"echo 00d03b50-a460-11e8-8cf1-525400721501 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"delta\": \"0:00:00.581384\", \"end\": \"2018-08-20 06:30:37.985514\", \"rc\": 0, \"start\": \"2018-08-20 06:30:37.404130\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"00d03b50-a460-11e8-8cf1-525400721501\", \"stdout_lines\": [\"00d03b50-a460-11e8-8cf1-525400721501\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.781) 0:01:18.698 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.052) 0:01:18.750 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.047) 0:01:18.798 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"mds_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.085) 0:01:18.883 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.043) 0:01:18.927 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.053) 0:01:18.980 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.048) 0:01:19.029 ********* ", 
"skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.055) 0:01:19.084 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.134 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.184 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.047) 0:01:19.231 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.042) 0:01:19.274 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.048) 0:01:19.322 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.049) 0:01:19.372 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.050) 0:01:19.422 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : get current cluster status (if already running)] *********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.075) 0:01:19.498 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.053) 0:01:19.552 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227", "Monday 20 August 2018 06:30:38 -0400 (0:00:00.051) 0:01:19.603 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact 
rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237", "Monday 20 August 2018 06:30:39 -0400 (0:00:00.062) 0:01:19.666 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 20 August 2018 06:30:39 -0400 (0:00:00.049) 0:01:19.716 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 20 August 2018 06:30:39 -0400 (0:00:00.224) 0:01:19.940 ********* ", "ok: [controller-0] => (item=/etc/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 160, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mon) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", 
\"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 31, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/tmp) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 28, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 35, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", 
\"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 167}", "ok: [controller-0] => (item=/var/run/ceph) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 60, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 20 August 2018 06:30:41 -0400 (0:00:02.125) 0:01:22.066 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 20 August 2018 06:30:41 -0400 (0:00:00.050) 0:01:22.116 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure 
radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 20 August 2018 06:30:41 -0400 (0:00:00.061) 0:01:22.178 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Monday 20 August 2018 06:30:41 -0400 (0:00:00.053) 0:01:22.232 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 20 August 2018 06:30:41 -0400 (0:00:00.047) 0:01:22.280 ********* ", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [controller-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.383) 0:01:22.664 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"monitor_name\": \"controller-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.083) 0:01:22.748 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.045) 0:01:22.793 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.022056\", \"end\": \"2018-08-20 10:30:42.346582\", \"rc\": 0, \"start\": \"2018-08-20 10:30:42.324526\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.257) 0:01:23.050 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.080) 0:01:23.131 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-controller-0\"], \"delta\": \"0:00:00.022201\", \"end\": \"2018-08-20 10:30:42.678900\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:42.656699\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"e09152a9bfe0\", \"stdout_lines\": [\"e09152a9bfe0\"]}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.253) 0:01:23.384 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.058) 0:01:23.442 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.063) 0:01:23.505 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.054) 0:01:23.560 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 20 August 2018 06:30:42 -0400 (0:00:00.068) 0:01:23.629 ********* ", "skipping: [controller-0] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"item\": 
\"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.121) 0:01:23.751 ********* ", "skipping: [controller-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.client.admin.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => 
(item=[u'/etc/ceph/ceph.mon.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/etc/ceph/ceph.mon.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", 
{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [controller-0] => (item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'skipped': True, '_ansible_no_log': False, 'skip_reason': u'Conditional result was False', '_ansible_item_result': True, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', 'changed': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"changed\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"skip_reason\": \"Conditional result was False\", \"skipped\": true}], \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.130) 0:01:23.882 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.049) 0:01:23.931 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.048) 0:01:23.980 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.052) 0:01:24.032 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.055) 0:01:24.087 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.053) 0:01:24.141 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.047) 0:01:24.188 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.045) 0:01:24.234 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.050) 0:01:24.284 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"e09152a9bfe0\"], \"delta\": \"0:00:00.021511\", \"end\": \"2018-08-20 10:30:43.851369\", \"rc\": 0, \"start\": \"2018-08-20 10:30:43.829858\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460\\\",\\n \\\"Created\\\": \\\"2018-08-20T10:30:09.142796973Z\\\",\\n \\\"Path\\\": \\\"/entrypoint.sh\\\",\\n \\\"Args\\\": [],\\n \\\"State\\\": {\\n \\\"Status\\\": \\\"running\\\",\\n \\\"Running\\\": true,\\n \\\"Paused\\\": false,\\n \\\"Restarting\\\": false,\\n \\\"OOMKilled\\\": false,\\n \\\"Dead\\\": false,\\n \\\"Pid\\\": 44599,\\n \\\"ExitCode\\\": 0,\\n \\\"Error\\\": 
\\\"\\\",\\n \\\"StartedAt\\\": \\\"2018-08-20T10:30:09.30892439Z\\\",\\n \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\\n },\\n \\\"Image\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/resolv.conf\\\",\\n \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hostname\\\",\\n \\\"HostsPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hosts\\\",\\n \\\"LogPath\\\": \\\"\\\",\\n \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\\n \\\"RestartCount\\\": 0,\\n \\\"Driver\\\": \\\"overlay2\\\",\\n \\\"MountLabel\\\": \\\"\\\",\\n \\\"ProcessLabel\\\": \\\"\\\",\\n \\\"AppArmorProfile\\\": \\\"\\\",\\n \\\"ExecIDs\\\": null,\\n \\\"HostConfig\\\": {\\n \\\"Binds\\\": [\\n \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\\n \\\"/etc/ceph:/etc/ceph:z\\\",\\n \\\"/var/run/ceph:/var/run/ceph:z\\\",\\n \\\"/etc/localtime:/etc/localtime:ro\\\"\\n ],\\n \\\"ContainerIDFile\\\": \\\"\\\",\\n \\\"LogConfig\\\": {\\n \\\"Type\\\": \\\"journald\\\",\\n \\\"Config\\\": {}\\n },\\n \\\"NetworkMode\\\": \\\"host\\\",\\n \\\"PortBindings\\\": {},\\n \\\"RestartPolicy\\\": {\\n \\\"Name\\\": \\\"no\\\",\\n \\\"MaximumRetryCount\\\": 0\\n },\\n \\\"AutoRemove\\\": true,\\n \\\"VolumeDriver\\\": \\\"\\\",\\n \\\"VolumesFrom\\\": null,\\n \\\"CapAdd\\\": null,\\n \\\"CapDrop\\\": null,\\n \\\"Dns\\\": [],\\n \\\"DnsOptions\\\": [],\\n \\\"DnsSearch\\\": [],\\n \\\"ExtraHosts\\\": null,\\n \\\"GroupAdd\\\": null,\\n \\\"IpcMode\\\": \\\"\\\",\\n \\\"Cgroup\\\": \\\"\\\",\\n \\\"Links\\\": null,\\n \\\"OomScoreAdj\\\": 0,\\n \\\"PidMode\\\": \\\"\\\",\\n \\\"Privileged\\\": false,\\n \\\"PublishAllPorts\\\": false,\\n \\\"ReadonlyRootfs\\\": false,\\n \\\"SecurityOpt\\\": null,\\n \\\"UTSMode\\\": \\\"\\\",\\n 
\\\"UsernsMode\\\": \\\"\\\",\\n \\\"ShmSize\\\": 67108864,\\n \\\"Runtime\\\": \\\"docker-runc\\\",\\n \\\"ConsoleSize\\\": [\\n 0,\\n 0\\n ],\\n \\\"Isolation\\\": \\\"\\\",\\n \\\"CpuShares\\\": 0,\\n \\\"Memory\\\": 3221225472,\\n \\\"NanoCpus\\\": 0,\\n \\\"CgroupParent\\\": \\\"\\\",\\n \\\"BlkioWeight\\\": 0,\\n \\\"BlkioWeightDevice\\\": null,\\n \\\"BlkioDeviceReadBps\\\": null,\\n \\\"BlkioDeviceWriteBps\\\": null,\\n \\\"BlkioDeviceReadIOps\\\": null,\\n \\\"BlkioDeviceWriteIOps\\\": null,\\n \\\"CpuPeriod\\\": 0,\\n \\\"CpuQuota\\\": 100000,\\n \\\"CpuRealtimePeriod\\\": 0,\\n \\\"CpuRealtimeRuntime\\\": 0,\\n \\\"CpusetCpus\\\": \\\"\\\",\\n \\\"CpusetMems\\\": \\\"\\\",\\n \\\"Devices\\\": [],\\n \\\"DiskQuota\\\": 0,\\n \\\"KernelMemory\\\": 0,\\n \\\"MemoryReservation\\\": 0,\\n \\\"MemorySwap\\\": 6442450944,\\n \\\"MemorySwappiness\\\": -1,\\n \\\"OomKillDisable\\\": false,\\n \\\"PidsLimit\\\": 0,\\n \\\"Ulimits\\\": null,\\n \\\"CpuCount\\\": 0,\\n \\\"CpuPercent\\\": 0,\\n \\\"IOMaximumIOps\\\": 0,\\n \\\"IOMaximumBandwidth\\\": 0\\n },\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9-init/diff:/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff:/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/work\\\"\\n }\\n },\\n \\\"Mounts\\\": [\\n {\\n \\\"Type\\\": 
\\\"bind\\\",\\n \\\"Source\\\": \\\"/var/lib/ceph\\\",\\n \\\"Destination\\\": \\\"/var/lib/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/ceph\\\",\\n \\\"Destination\\\": \\\"/etc/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/var/run/ceph\\\",\\n \\\"Destination\\\": \\\"/var/run/ceph\\\",\\n \\\"Mode\\\": \\\"z\\\",\\n \\\"RW\\\": true,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n },\\n {\\n \\\"Type\\\": \\\"bind\\\",\\n \\\"Source\\\": \\\"/etc/localtime\\\",\\n \\\"Destination\\\": \\\"/etc/localtime\\\",\\n \\\"Mode\\\": \\\"ro\\\",\\n \\\"RW\\\": false,\\n \\\"Propagation\\\": \\\"rprivate\\\"\\n }\\n ],\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"controller-0\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": true,\\n \\\"AttachStderr\\\": true,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"IP_VERSION=4\\\",\\n \\\"MON_IP=172.17.3.14\\\",\\n \\\"CLUSTER=ceph\\\",\\n \\\"FSID=00d03b50-a460-11e8-8cf1-525400721501\\\",\\n \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\\n \\\"CEPH_DAEMON=MON\\\",\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-11\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n 
\\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": null,\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This 
image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"NetworkSettings\\\": {\\n \\\"Bridge\\\": \\\"\\\",\\n \\\"SandboxID\\\": \\\"0278a3b0888e406c316ccc3b14210d2a79ce05281a17141a4255a1dcb51f4d88\\\",\\n \\\"HairpinMode\\\": false,\\n \\\"LinkLocalIPv6Address\\\": \\\"\\\",\\n \\\"LinkLocalIPv6PrefixLen\\\": 0,\\n \\\"Ports\\\": {},\\n \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\\n \\\"SecondaryIPAddresses\\\": null,\\n \\\"SecondaryIPv6Addresses\\\": null,\\n \\\"EndpointID\\\": \\\"\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"MacAddress\\\": \\\"\\\",\\n \\\"Networks\\\": {\\n \\\"host\\\": {\\n \\\"IPAMConfig\\\": null,\\n \\\"Links\\\": null,\\n \\\"Aliases\\\": null,\\n \\\"NetworkID\\\": \\\"77a481d4bde7dbe2de1254b4f8439d7ef986772190569fe08b0d4650df1853b3\\\",\\n \\\"EndpointID\\\": \\\"50d85912eb42c78e21a711c557cef5ab5974dde83cbe2a0ea94a839b52e6367b\\\",\\n \\\"Gateway\\\": \\\"\\\",\\n \\\"IPAddress\\\": \\\"\\\",\\n \\\"IPPrefixLen\\\": 0,\\n \\\"IPv6Gateway\\\": \\\"\\\",\\n \\\"GlobalIPv6Address\\\": \\\"\\\",\\n \\\"GlobalIPv6PrefixLen\\\": 0,\\n \\\"MacAddress\\\": \\\"\\\"\\n }\\n }\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460\\\",\", \" \\\"Created\\\": \\\"2018-08-20T10:30:09.142796973Z\\\",\", \" \\\"Path\\\": \\\"/entrypoint.sh\\\",\", \" \\\"Args\\\": [],\", \" \\\"State\\\": {\", \" \\\"Status\\\": \\\"running\\\",\", \" \\\"Running\\\": true,\", \" \\\"Paused\\\": false,\", \" \\\"Restarting\\\": false,\", \" \\\"OOMKilled\\\": 
false,\", \" \\\"Dead\\\": false,\", \" \\\"Pid\\\": 44599,\", \" \\\"ExitCode\\\": 0,\", \" \\\"Error\\\": \\\"\\\",\", \" \\\"StartedAt\\\": \\\"2018-08-20T10:30:09.30892439Z\\\",\", \" \\\"FinishedAt\\\": \\\"0001-01-01T00:00:00Z\\\"\", \" },\", \" \\\"Image\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"ResolvConfPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/resolv.conf\\\",\", \" \\\"HostnamePath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hostname\\\",\", \" \\\"HostsPath\\\": \\\"/var/lib/docker/containers/e09152a9bfe0f554adf5ab1d26779b158b6ea4f52ec2ea33b2517b5f5ea15460/hosts\\\",\", \" \\\"LogPath\\\": \\\"\\\",\", \" \\\"Name\\\": \\\"/ceph-mon-controller-0\\\",\", \" \\\"RestartCount\\\": 0,\", \" \\\"Driver\\\": \\\"overlay2\\\",\", \" \\\"MountLabel\\\": \\\"\\\",\", \" \\\"ProcessLabel\\\": \\\"\\\",\", \" \\\"AppArmorProfile\\\": \\\"\\\",\", \" \\\"ExecIDs\\\": null,\", \" \\\"HostConfig\\\": {\", \" \\\"Binds\\\": [\", \" \\\"/var/lib/ceph:/var/lib/ceph:z\\\",\", \" \\\"/etc/ceph:/etc/ceph:z\\\",\", \" \\\"/var/run/ceph:/var/run/ceph:z\\\",\", \" \\\"/etc/localtime:/etc/localtime:ro\\\"\", \" ],\", \" \\\"ContainerIDFile\\\": \\\"\\\",\", \" \\\"LogConfig\\\": {\", \" \\\"Type\\\": \\\"journald\\\",\", \" \\\"Config\\\": {}\", \" },\", \" \\\"NetworkMode\\\": \\\"host\\\",\", \" \\\"PortBindings\\\": {},\", \" \\\"RestartPolicy\\\": {\", \" \\\"Name\\\": \\\"no\\\",\", \" \\\"MaximumRetryCount\\\": 0\", \" },\", \" \\\"AutoRemove\\\": true,\", \" \\\"VolumeDriver\\\": \\\"\\\",\", \" \\\"VolumesFrom\\\": null,\", \" \\\"CapAdd\\\": null,\", \" \\\"CapDrop\\\": null,\", \" \\\"Dns\\\": [],\", \" \\\"DnsOptions\\\": [],\", \" \\\"DnsSearch\\\": [],\", \" \\\"ExtraHosts\\\": null,\", \" \\\"GroupAdd\\\": null,\", \" \\\"IpcMode\\\": \\\"\\\",\", \" \\\"Cgroup\\\": \\\"\\\",\", \" 
\\\"Links\\\": null,\", \" \\\"OomScoreAdj\\\": 0,\", \" \\\"PidMode\\\": \\\"\\\",\", \" \\\"Privileged\\\": false,\", \" \\\"PublishAllPorts\\\": false,\", \" \\\"ReadonlyRootfs\\\": false,\", \" \\\"SecurityOpt\\\": null,\", \" \\\"UTSMode\\\": \\\"\\\",\", \" \\\"UsernsMode\\\": \\\"\\\",\", \" \\\"ShmSize\\\": 67108864,\", \" \\\"Runtime\\\": \\\"docker-runc\\\",\", \" \\\"ConsoleSize\\\": [\", \" 0,\", \" 0\", \" ],\", \" \\\"Isolation\\\": \\\"\\\",\", \" \\\"CpuShares\\\": 0,\", \" \\\"Memory\\\": 3221225472,\", \" \\\"NanoCpus\\\": 0,\", \" \\\"CgroupParent\\\": \\\"\\\",\", \" \\\"BlkioWeight\\\": 0,\", \" \\\"BlkioWeightDevice\\\": null,\", \" \\\"BlkioDeviceReadBps\\\": null,\", \" \\\"BlkioDeviceWriteBps\\\": null,\", \" \\\"BlkioDeviceReadIOps\\\": null,\", \" \\\"BlkioDeviceWriteIOps\\\": null,\", \" \\\"CpuPeriod\\\": 0,\", \" \\\"CpuQuota\\\": 100000,\", \" \\\"CpuRealtimePeriod\\\": 0,\", \" \\\"CpuRealtimeRuntime\\\": 0,\", \" \\\"CpusetCpus\\\": \\\"\\\",\", \" \\\"CpusetMems\\\": \\\"\\\",\", \" \\\"Devices\\\": [],\", \" \\\"DiskQuota\\\": 0,\", \" \\\"KernelMemory\\\": 0,\", \" \\\"MemoryReservation\\\": 0,\", \" \\\"MemorySwap\\\": 6442450944,\", \" \\\"MemorySwappiness\\\": -1,\", \" \\\"OomKillDisable\\\": false,\", \" \\\"PidsLimit\\\": 0,\", \" \\\"Ulimits\\\": null,\", \" \\\"CpuCount\\\": 0,\", \" \\\"CpuPercent\\\": 0,\", \" \\\"IOMaximumIOps\\\": 0,\", \" \\\"IOMaximumBandwidth\\\": 0\", \" },\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9-init/diff:/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff:/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": 
\\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/472a7191555036a8004ebe33cee4260bf92fe6bc7d72ebb41c716cc3b04d88e9/work\\\"\", \" }\", \" },\", \" \\\"Mounts\\\": [\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/lib/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/ceph\\\",\", \" \\\"Destination\\\": \\\"/etc/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/var/run/ceph\\\",\", \" \\\"Destination\\\": \\\"/var/run/ceph\\\",\", \" \\\"Mode\\\": \\\"z\\\",\", \" \\\"RW\\\": true,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" },\", \" {\", \" \\\"Type\\\": \\\"bind\\\",\", \" \\\"Source\\\": \\\"/etc/localtime\\\",\", \" \\\"Destination\\\": \\\"/etc/localtime\\\",\", \" \\\"Mode\\\": \\\"ro\\\",\", \" \\\"RW\\\": false,\", \" \\\"Propagation\\\": \\\"rprivate\\\"\", \" }\", \" ],\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"controller-0\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": true,\", \" \\\"AttachStderr\\\": true,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": 
false,\", \" \\\"Env\\\": [\", \" \\\"IP_VERSION=4\\\",\", \" \\\"MON_IP=172.17.3.14\\\",\", \" \\\"CLUSTER=ceph\\\",\", \" \\\"FSID=00d03b50-a460-11e8-8cf1-525400721501\\\",\", \" \\\"CEPH_PUBLIC_NETWORK=172.17.3.0/24\\\",\", \" \\\"CEPH_DAEMON=MON\\\",\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"192.168.24.1:8787/rhceph:3-11\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": null,\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" 
\\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"NetworkSettings\\\": {\", \" \\\"Bridge\\\": \\\"\\\",\", \" \\\"SandboxID\\\": \\\"0278a3b0888e406c316ccc3b14210d2a79ce05281a17141a4255a1dcb51f4d88\\\",\", \" \\\"HairpinMode\\\": false,\", \" \\\"LinkLocalIPv6Address\\\": \\\"\\\",\", \" \\\"LinkLocalIPv6PrefixLen\\\": 0,\", \" \\\"Ports\\\": {},\", \" \\\"SandboxKey\\\": \\\"/var/run/docker/netns/default\\\",\", \" \\\"SecondaryIPAddresses\\\": null,\", \" \\\"SecondaryIPv6Addresses\\\": null,\", \" \\\"EndpointID\\\": \\\"\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"MacAddress\\\": \\\"\\\",\", \" \\\"Networks\\\": {\", \" \\\"host\\\": {\", \" \\\"IPAMConfig\\\": null,\", \" \\\"Links\\\": null,\", \" \\\"Aliases\\\": null,\", \" \\\"NetworkID\\\": 
\\\"77a481d4bde7dbe2de1254b4f8439d7ef986772190569fe08b0d4650df1853b3\\\",\", \" \\\"EndpointID\\\": \\\"50d85912eb42c78e21a711c557cef5ab5974dde83cbe2a0ea94a839b52e6367b\\\",\", \" \\\"Gateway\\\": \\\"\\\",\", \" \\\"IPAddress\\\": \\\"\\\",\", \" \\\"IPPrefixLen\\\": 0,\", \" \\\"IPv6Gateway\\\": \\\"\\\",\", \" \\\"GlobalIPv6Address\\\": \\\"\\\",\", \" \\\"GlobalIPv6PrefixLen\\\": 0,\", \" \\\"MacAddress\\\": \\\"\\\"\", \" }\", \" }\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.290) 0:01:24.575 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 20 August 2018 06:30:43 -0400 (0:00:00.053) 0:01:24.628 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.053) 0:01:24.682 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.054) 0:01:24.737 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.077) 0:01:24.815 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.055) 0:01:24.870 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:24.920 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"inspect\", \"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\"], \"delta\": \"0:00:00.027415\", \"end\": \"2018-08-20 10:30:44.478818\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:44.451403\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": 
{},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red 
Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": 
\\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n 
\\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" 
\\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": 
\\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.282) 0:01:25.203 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.252 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.052) 0:01:25.304 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.048) 0:01:25.353 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.052) 0:01:25.406 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.047) 0:01:25.453 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.502 ********* ", "ok: [controller-0] => {\"ansible_facts\": 
{\"ceph_mon_image_repodigest_before_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.087) 0:01:25.590 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 20 August 2018 06:30:44 -0400 (0:00:00.049) 0:01:25.639 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.047) 0:01:25.687 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.050) 0:01:25.738 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.048) 0:01:25.786 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact 
ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.048) 0:01:25.835 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.049) 0:01:25.884 ********* ", "ok: [controller-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": [\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.035260\", \"end\": \"2018-08-20 10:30:45.441486\", \"rc\": 0, \"start\": \"2018-08-20 10:30:45.406226\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Image is up to date for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... 
\", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Image is up to date for 192.168.24.1:8787/rhceph:3-11\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.265) 0:01:26.150 ********* ", "changed: [controller-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.024950\", \"end\": \"2018-08-20 10:30:45.717487\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:45.692537\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n 
\\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e 
CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": 
\\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/40a9733d4f3b4a6669f49b30e3d8d81ad85ca85964e3c8280dbb38c50336d95a/diff:/var/lib/docker/overlay2/947970a2d98377672bef065571ea64f2071011fde99051597975e0e2b9c4baf8/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/8e629685581c3fcd242d32672a8bc4f8e97070c9663b859f61c181d0a38b8b0d/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.285) 0:01:26.435 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.077) 0:01:26.512 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.053) 0:01:26.566 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 20 August 2018 06:30:45 -0400 (0:00:00.045) 0:01:26.611 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.657 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.044) 0:01:26.701 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.048) 0:01:26.750 ********* ", "skipping: [controller-0] 
=> {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.044) 0:01:26.795 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.059) 0:01:26.854 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.900 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.946 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:26.991 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 20 August 2018 06:30:46 -0400 (0:00:00.045) 0:01:27.036 ********* ", "ok: [controller-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-11\", \"--version\"], \"delta\": \"0:00:00.450550\", \"end\": \"2018-08-20 10:30:47.118123\", \"rc\": 0, \"start\": \"2018-08-20 10:30:46.667573\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.784) 0:01:27.821 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-30.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.203) 0:01:28.024 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.051) 0:01:28.076 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.050) 0:01:28.126 
********* ", "ok: [controller-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.183) 0:01:28.309 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.146) 0:01:28.456 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 20 August 2018 06:30:47 -0400 (0:00:00.043) 0:01:28.499 ********* ", "changed: [controller-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", 
\"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "changed: [controller-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 26, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 20 August 2018 06:30:48 -0400 (0:00:00.849) 0:01:29.349 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 20 August 2018 06:30:48 -0400 (0:00:00.065) 0:01:29.415 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 20 August 2018 06:30:48 -0400 (0:00:00.059) 0:01:29.474 ********* ", "ok: [controller-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": 
\"directory\", \"uid\": 42430}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 20 August 2018 06:30:49 -0400 (0:00:00.215) 0:01:29.690 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 20 August 2018 06:30:49 -0400 (0:00:00.057) 0:01:29.747 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 20 August 2018 06:30:49 -0400 (0:00:00.050) 0:01:29.798 ********* ", "changed: [controller-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 117, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 20 August 2018 06:30:49 -0400 (0:00:00.252) 0:01:30.051 ********* ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"ad274129acdf99bf79681112519249b5cd433cfc\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"d12c4a40219f2d53aebea240077fc57d\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1103, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761049.45-192564754238695/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: 
/usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 20 August 2018 06:30:49 -0400 (0:00:00.564) 0:01:30.615 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:2", "Monday 20 August 2018 06:30:50 -0400 (0:00:00.062) 0:01:30.677 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"docker_exec_cmd_mgr\": \"docker exec ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-mgr : create mgr directory] *****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:2", "Monday 20 August 2018 06:30:50 -0400 (0:00:00.117) 0:01:30.794 ********* ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:10", "Monday 20 August 2018 06:30:50 -0400 (0:00:00.244) 0:01:31.039 ********* ", "changed: [controller-0] => (item={u'dest': u'/var/lib/ceph/mgr/ceph-controller-0/keyring', u'name': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": true, \"dest\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"name\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"md5sum\": \"82352f6d3d5aac744c3838aa345e1f7c\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 
67, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761050.44-264880339939330/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [controller-0] => (item={u'dest': u'/etc/ceph/ceph.client.admin.keyring', u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"dest\": \"/etc/ceph/ceph.client.admin.keyring\", \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : set mgr key permissions] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/common.yml:24", "Monday 20 August 2018 06:30:50 -0400 (0:00:00.554) 0:01:31.593 ********* ", "ok: [controller-0] => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"mode\": \"0600\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mgr/ceph-controller-0/keyring\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 67, \"state\": \"file\", \"uid\": 167}", "", "TASK [ceph-mgr : install ceph-mgr package on RedHat or SUSE] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.244) 0:01:31.838 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : install ceph mgr for debian] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:9", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.061) 0:01:31.899 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : ensure systemd service override directory exists] *************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:17", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.073) 0:01:31.973 ********* ", "skipping: [controller-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:25", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.057) 0:01:32.031 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : start and add that the mgr service to the init sequence] ******", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:35", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.086) 0:01:32.118 ********* ", "skipping: [controller-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:2", "Monday 20 August 2018 06:30:51 -0400 (0:00:00.053) 0:01:32.171 ********* ", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for controller-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for controller-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for controller-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for controller-0", "changed: [controller-0] => {\"changed\": true, \"checksum\": \"999f6cead45dab5c24bf2b8115beaf5b3c3389b5\", \"dest\": \"/etc/systemd/system/ceph-mgr@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"887c4695cb992b04476c1f085621325e\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 734, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761051.57-35019727649894/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-mgr : systemd start mgr container] 
**********************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/docker/start_docker_mgr.yml:13", "Monday 20 August 2018 06:30:52 -0400 (0:00:00.824) 0:01:32.995 ********* ", "changed: [controller-0] => {\"changed\": true, \"enabled\": true, \"name\": \"ceph-mgr@controller-0\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dmgr.slice basic.target docker.service systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph Manager\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker run --rm --net=host --memory=1g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro -e CLUSTER=ceph -e CEPH_DAEMON=MGR -e MGR_DASHBOARD=0 --name=ceph-mgr-controller-0 192.168.24.1:8787/rhceph:3-11 ; ignore_errors=no ; 
start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStopPost\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-mgr-controller-0 ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-mgr@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-mgr@controller-0.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"127799\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"127799\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-mgr@controller-0.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", 
\"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dmgr.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dmgr.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:19", "Monday 20 August 2018 06:30:52 -0400 (0:00:00.525) 0:01:33.521 ********* ", "changed: [controller-0 -> 192.168.24.12] => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"--format\", \"json\", \"mgr\", \"module\", \"ls\"], 
\"delta\": \"0:00:00.345559\", \"end\": \"2018-08-20 10:30:53.434508\", \"rc\": 0, \"start\": \"2018-08-20 10:30:53.088949\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\\n{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\", \"stdout_lines\": [\"\", \"{\\\"enabled_modules\\\":[\\\"balancer\\\",\\\"restful\\\",\\\"status\\\"],\\\"disabled_modules\\\":[\\\"dashboard\\\",\\\"influx\\\",\\\"localpool\\\",\\\"prometheus\\\",\\\"selftest\\\",\\\"zabbix\\\"]}\"]}", "", "TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:26", "Monday 20 August 2018 06:30:53 -0400 (0:00:00.617) 0:01:34.139 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_ceph_mgr_modules\": {\"disabled_modules\": [\"dashboard\", \"influx\", \"localpool\", \"prometheus\", \"selftest\", \"zabbix\"], \"enabled_modules\": [\"balancer\", \"restful\", \"status\"]}}, \"changed\": false}", "", "TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32", "Monday 20 August 2018 06:30:53 -0400 (0:00:00.086) 0:01:34.226 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_disabled_ceph_mgr_modules\": \"[Undefined, Undefined, Undefined, Undefined, Undefined, Undefined]\"}, \"changed\": false}", "", "TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:38", "Monday 20 August 2018 06:30:53 -0400 (0:00:00.108) 0:01:34.334 ********* ", "changed: [controller-0 -> 192.168.24.12] => (item=balancer) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", 
\"balancer\"], \"delta\": \"0:00:01.324554\", \"end\": \"2018-08-20 10:30:55.205683\", \"item\": \"balancer\", \"rc\": 0, \"start\": \"2018-08-20 10:30:53.881129\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [controller-0 -> 192.168.24.12] => (item=restful) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"mgr\", \"module\", \"disable\", \"restful\"], \"delta\": \"0:00:00.832155\", \"end\": \"2018-08-20 10:30:56.208878\", \"item\": \"restful\", \"rc\": 0, \"start\": \"2018-08-20 10:30:55.376723\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-mgr : add modules to ceph-mgr] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-mgr/tasks/main.yml:49", "Monday 20 August 2018 06:30:56 -0400 (0:00:02.621) 0:01:36.955 ********* ", "skipping: [controller-0] => (item=status) => {\"changed\": false, \"item\": \"status\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 20 August 2018 06:30:56 -0400 (0:00:00.035) 0:01:36.991 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": true}, \"changed\": false}", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 20 August 2018 06:30:56 -0400 (0:00:00.186) 0:01:37.177 ********* ", "ok: [controller-0] => {\"changed\": false, \"checksum\": \"3b92c07facdbaa789b36f850d92d7444e2bb6a27\", \"dest\": \"/tmp/restart_mgr_daemon.sh\", \"gid\": 0, \"group\": \"root\", \"mode\": \"0750\", \"owner\": \"root\", \"path\": \"/tmp/restart_mgr_daemon.sh\", \"secontext\": \"unconfined_u:object_r:user_tmp_t:s0\", \"size\": 843, \"state\": 
\"file\", \"uid\": 0}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.569) 0:01:37.747 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.086) 0:01:37.833 ********* ", "skipping: [controller-0] => (item=controller-0) => {\"changed\": false, \"item\": \"controller-0\", \"skip_reason\": \"Conditional result was False\"}", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.128) 0:01:37.962 ********* ", "ok: [controller-0] => {\"ansible_facts\": {\"_mgr_handler_called\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [set ceph manager install 'Complete'] *************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:129", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.208) 0:01:38.170 ********* ", "ok: [controller-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_mgr\": {\"end\": \"20180820063057Z\", \"status\": \"Complete\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "PLAY [osds] ********************************************************************", "", "TASK [set ceph osd install 'In Progress'] **************************************", "task path: /usr/share/ceph-ansible/site-docker.yml.sample:141", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.331) 0:01:38.501 ********* ", "ok: [ceph-0] => {\"ansible_stats\": {\"aggregate\": true, \"data\": {\"installer_phase_ceph_osd\": {\"start\": \"20180820063057Z\", \"status\": \"In Progress\"}}, \"per_host\": false}, \"changed\": false}", "META: ran handlers", "", "TASK [ceph-defaults : check for a mon 
container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:2", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.094) 0:01:38.596 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for an osd container] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:11", "Monday 20 August 2018 06:30:57 -0400 (0:00:00.051) 0:01:38.648 ********* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-osd-ceph-0\"], \"delta\": \"0:00:00.030774\", \"end\": \"2018-08-20 10:30:58.213214\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:30:58.182440\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-defaults : check for a mds container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:20", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.271) 0:01:38.919 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a rgw container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:29", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:38.971 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a mgr container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:38", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:39.022 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", 
"", "TASK [ceph-defaults : check for a rbd mirror container] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:47", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.047) 0:01:39.070 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a nfs container] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_running_containers.yml:56", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.048) 0:01:39.118 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mon socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:2", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.050) 0:01:39.169 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mon socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:11", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.051) 0:01:39.220 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mon socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:21", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.260 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph osd socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:30", "Monday 20 
August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.300 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph osd socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:40", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.341 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph osd socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:50", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.040) 0:01:39.381 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mds socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:59", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.045) 0:01:39.426 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mds socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:69", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.042) 0:01:39.469 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mds socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:79", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.041) 0:01:39.510 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
check for a ceph rgw socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:88", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.039) 0:01:39.550 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rgw socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:98", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.041) 0:01:39.592 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rgw socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:108", "Monday 20 August 2018 06:30:58 -0400 (0:00:00.039) 0:01:39.631 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph mgr socket] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:117", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.043) 0:01:39.675 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph mgr socket is in-use] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:127", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.048) 0:01:39.723 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph mgr socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:137", "Monday 20 August 2018 
06:30:59 -0400 (0:00:00.048) 0:01:39.772 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph rbd mirror socket] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:146", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.083) 0:01:39.856 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph rbd mirror socket is in-use] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:156", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.070) 0:01:39.927 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : remove ceph rbd mirror socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:166", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.053) 0:01:39.980 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check for a ceph nfs ganesha socket] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:175", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.047) 0:01:40.028 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if the ceph nfs ganesha socket is in-use] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:184", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.062) 0:01:40.090 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : 
remove ceph nfs ganesha socket if exists and not used by a process] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/check_socket_non_container.yml:194", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.054) 0:01:40.145 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : check if it is atomic host] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:2", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.056) 0:01:40.202 ********* ", "ok: [ceph-0] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact is_atomic] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:7", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.199) 0:01:40.401 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"is_atomic\": false}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_hostname] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:11", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.079) 0:01:40.480 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact monitor_name ansible_fqdn] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:17", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.082) 0:01:40.563 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact docker_exec_cmd] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:23", "Monday 20 August 2018 06:30:59 -0400 (0:00:00.079) 0:01:40.643 ********* ", "ok: [ceph-0 -> 192.168.24.12] => {\"ansible_facts\": {\"docker_exec_cmd\": \"docker exec 
ceph-mon-controller-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : is ceph running already?] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:34", "Monday 20 August 2018 06:31:00 -0400 (0:00:00.141) 0:01:40.784 ********* ", "ok: [ceph-0 -> 192.168.24.12] => {\"changed\": false, \"cmd\": [\"timeout\", \"5\", \"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"fsid\"], \"delta\": \"0:00:00.389777\", \"end\": \"2018-08-20 10:31:00.737159\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:00.347382\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"00d03b50-a460-11e8-8cf1-525400721501\", \"stdout_lines\": [\"00d03b50-a460-11e8-8cf1-525400721501\"]}", "", "TASK [ceph-defaults : check if /var/lib/mistral/overcloud/ceph-ansible/fetch_dir directory exists] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:47", "Monday 20 August 2018 06:31:00 -0400 (0:00:00.668) 0:01:41.453 ********* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"stat\": {\"exists\": false}}", "", "TASK [ceph-defaults : set_fact ceph_current_fsid rc 1] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:57", "Monday 20 August 2018 06:31:00 -0400 (0:00:00.173) 0:01:41.626 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : create a local fetch directory if it does not exist] *****", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:64", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.049) 0:01:41.676 ********* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"gid\": 42430, \"group\": \"mistral\", \"mode\": \"0755\", \"owner\": \"mistral\", \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir\", \"size\": 80, \"state\": \"directory\", \"uid\": 42430}", "", "TASK [ceph-defaults : 
set_fact fsid ceph_current_fsid.stdout] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:74", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.189) 0:01:41.865 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"fsid\": \"00d03b50-a460-11e8-8cf1-525400721501\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:81", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.067) 0:01:41.932 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"dummy\"}, \"changed\": false}", "", "TASK [ceph-defaults : generate cluster fsid] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:85", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.084) 0:01:42.016 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : reuse cluster fsid when cluster is already running] ******", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:96", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.044) 0:01:42.061 ********* ", "ok: [ceph-0 -> localhost] => {\"changed\": false, \"cmd\": \"echo 00d03b50-a460-11e8-8cf1-525400721501 | tee /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf\", \"rc\": 0, \"stdout\": \"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\", \"stdout_lines\": [\"skipped, since /var/lib/mistral/overcloud/ceph-ansible/fetch_dir/ceph_cluster_uuid.conf exists\"]}", "", "TASK [ceph-defaults : read cluster fsid if it already exists] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:105", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.183) 0:01:42.244 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-defaults : set_fact fsid] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:117", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.043) 0:01:42.288 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact mds_name ansible_hostname] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:123", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.044) 0:01:42.333 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"mds_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact mds_name ansible_fqdn] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:129", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.086) 0:01:42.419 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_owner ceph] ****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:135", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.049) 0:01:42.469 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_owner\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_group rbd_client_directory_group] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:142", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.070) 0:01:42.540 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rbd_client_directory_group\": \"ceph\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact rbd_client_directory_mode 0770] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:149", "Monday 20 August 2018 06:31:01 -0400 (0:00:00.072) 0:01:42.613 ********* ", "ok: [ceph-0] => 
{\"ansible_facts\": {\"rbd_client_directory_mode\": \"0770\"}, \"changed\": false}", "", "TASK [ceph-defaults : resolve device link(s)] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:156", "Monday 20 August 2018 06:31:02 -0400 (0:00:00.075) 0:01:42.689 ********* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002677\", \"end\": \"2018-08-20 10:31:02.227220\", \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.224543\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.002381\", \"end\": \"2018-08-20 10:31:02.387310\", \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.384929\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.002341\", \"end\": \"2018-08-20 10:31:02.537575\", \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.535234\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.002424\", \"end\": \"2018-08-20 10:31:02.686784\", \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.684360\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", \"stdout_lines\": [\"/dev/vde\"]}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.002270\", \"end\": \"2018-08-20 10:31:02.829823\", \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-08-20 
10:31:02.827553\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}", "", "TASK [ceph-defaults : set_fact build devices from resolved symlinks] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:166", "Monday 20 August 2018 06:31:02 -0400 (0:00:00.838) 0:01:43.527 ********* ", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.227220', '_ansible_no_log': False, u'stdout': u'/dev/vdb', u'cmd': [u'readlink', u'-f', u'/dev/vdb'], u'rc': 0, 'item': u'/dev/vdb', u'delta': u'0:00:00.002677', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdb', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdb'], u'start': u'2018-08-20 10:31:02.224543', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdb\"], \"delta\": \"0:00:00.002677\", \"end\": \"2018-08-20 10:31:02.227220\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.224543\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdb\", \"stdout_lines\": [\"/dev/vdb\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.387310', '_ansible_no_log': False, u'stdout': u'/dev/vdc', u'cmd': [u'readlink', u'-f', u'/dev/vdc'], u'rc': 0, 
'item': u'/dev/vdc', u'delta': u'0:00:00.002381', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdc', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdc'], u'start': u'2018-08-20 10:31:02.384929', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdc\"], \"delta\": \"0:00:00.002381\", \"end\": \"2018-08-20 10:31:02.387310\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.384929\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdc\", \"stdout_lines\": [\"/dev/vdc\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.537575', '_ansible_no_log': False, u'stdout': u'/dev/vdd', u'cmd': [u'readlink', u'-f', u'/dev/vdd'], u'rc': 0, 'item': u'/dev/vdd', u'delta': u'0:00:00.002341', '_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdd', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdd'], u'start': u'2018-08-20 10:31:02.535234', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", 
\"/dev/vdd\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdd\"], \"delta\": \"0:00:00.002341\", \"end\": \"2018-08-20 10:31:02.537575\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.535234\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdd\", \"stdout_lines\": [\"/dev/vdd\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.686784', '_ansible_no_log': False, u'stdout': u'/dev/vde', u'cmd': [u'readlink', u'-f', u'/dev/vde'], u'rc': 0, 'item': u'/dev/vde', u'delta': u'0:00:00.002424', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vde', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vde'], u'start': u'2018-08-20 10:31:02.684360', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vde\"], \"delta\": \"0:00:00.002424\", \"end\": \"2018-08-20 10:31:02.686784\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.684360\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vde\", 
\"stdout_lines\": [\"/dev/vde\"]}}", "ok: [ceph-0] => (item={'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2018-08-20 10:31:02.829823', '_ansible_no_log': False, u'stdout': u'/dev/vdf', u'cmd': [u'readlink', u'-f', u'/dev/vdf'], u'rc': 0, 'item': u'/dev/vdf', u'delta': u'0:00:00.002270', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'readlink -f /dev/vdf', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, 'stdout_lines': [u'/dev/vdf'], u'start': u'2018-08-20 10:31:02.827553', '_ansible_ignore_errors': None, 'failed': False}) => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\", \"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false, \"item\": {\"changed\": false, \"cmd\": [\"readlink\", \"-f\", \"/dev/vdf\"], \"delta\": \"0:00:00.002270\", \"end\": \"2018-08-20 10:31:02.829823\", \"failed\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"readlink -f /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"rc\": 0, \"start\": \"2018-08-20 10:31:02.827553\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"/dev/vdf\", \"stdout_lines\": [\"/dev/vdf\"]}}", "", "TASK [ceph-defaults : set_fact build final devices list] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:175", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.273) 0:01:43.800 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"devices\": [\"/dev/vdb\", \"/dev/vdc\", \"/dev/vdd\", \"/dev/vde\", \"/dev/vdf\"]}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - non container] ***", "task path: 
/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:183", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.213) 0:01:44.014 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - non container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:190", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.045) 0:01:44.059 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for debian based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:197", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.048) 0:01:44.108 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat based system - container] ***", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:204", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.050) 0:01:44.158 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_uid for red hat] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:211", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.053) 0:01:44.211 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_uid\": 167}, \"changed\": false}", "", "TASK [ceph-defaults : get current cluster status (if already running)] *********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:219", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.185) 0:01:44.397 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact ceph_current_status (convert to 
json)] **********", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:223", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.048) 0:01:44.445 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:227", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.046) 0:01:44.491 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-defaults : set_fact rgw_hostname] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:237", "Monday 20 August 2018 06:31:03 -0400 (0:00:00.045) 0:01:44.537 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"rgw_hostname\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-defaults : set_fact ceph_directories] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:2", "Monday 20 August 2018 06:31:04 -0400 (0:00:00.176) 0:01:44.713 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_directories\": [\"/etc/ceph\", \"/var/lib/ceph/\", \"/var/lib/ceph/mon\", \"/var/lib/ceph/osd\", \"/var/lib/ceph/mds\", \"/var/lib/ceph/tmp\", \"/var/lib/ceph/radosgw\", \"/var/lib/ceph/bootstrap-rgw\", \"/var/lib/ceph/bootstrap-mds\", \"/var/lib/ceph/bootstrap-osd\", \"/var/lib/ceph/bootstrap-rbd\", \"/var/run/ceph\"]}, \"changed\": false}", "", "TASK [ceph-defaults : create ceph initial directories] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/create_ceph_initial_dirs.yml:18", "Monday 20 August 2018 06:31:04 -0400 (0:00:00.178) 0:01:44.892 ********* ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", 
\"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mon) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mon\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mon\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/tmp) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/tmp\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/tmp\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/radosgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/radosgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/radosgw\", \"secontext\": 
\"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "changed: [ceph-0] => (item=/var/run/ceph) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/run/ceph\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/run/ceph\", \"secontext\": \"unconfined_u:object_r:var_run_t:s0\", \"size\": 40, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-docker-common : fail if systemd is not present] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/system_checks.yml:2", "Monday 20 August 
2018 06:31:06 -0400 (0:00:01.907) 0:01:46.799 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure monitor_interface, monitor_address or monitor_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:2", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.051) 0:01:46.850 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : make sure radosgw_interface, radosgw_address or radosgw_address_block is defined] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:11", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.048) 0:01:46.899 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : warning deprecation for fqdn configuration] *********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/check_mandatory_vars.yml:20", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.048) 0:01:46.948 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove ceph udev rules] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/pre_requisites/remove_ceph_udev_rules.yml:2", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.042) 0:01:46.991 ********* ", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"path\": \"/usr/lib/udev/rules.d/95-ceph-osd.rules\", \"state\": \"absent\"}", "ok: [ceph-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) => {\"changed\": false, \"item\": \"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"path\": 
\"/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules\", \"state\": \"absent\"}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_hostname] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:14", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.391) 0:01:47.382 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"monitor_name\": \"ceph-0\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact monitor_name ansible_fqdn] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:20", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.076) 0:01:47.459 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get docker version] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:26", "Monday 20 August 2018 06:31:06 -0400 (0:00:00.042) 0:01:47.501 ********* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"--version\"], \"delta\": \"0:00:00.019686\", \"end\": \"2018-08-20 10:31:07.027927\", \"rc\": 0, \"start\": \"2018-08-20 10:31:07.008241\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Docker version 1.13.1, build 6e3bb8e/1.13.1\", \"stdout_lines\": [\"Docker version 1.13.1, build 6e3bb8e/1.13.1\"]}", "", "TASK [ceph-docker-common : set_fact ceph_docker_version ceph_docker_version.stdout.split] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:32", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.226) 0:01:47.727 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_docker_version\": \"1.13.1,\"}, \"changed\": false}", "", "TASK [ceph-docker-common : check if a cluster is already running] **************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:42", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.075) 0:01:47.803 ********* ", "ok: [ceph-0] 
=> {\"changed\": false, \"cmd\": [\"docker\", \"ps\", \"-q\", \"--filter=name=ceph-mon-ceph-0\"], \"delta\": \"0:00:00.018754\", \"end\": \"2018-08-20 10:31:07.335491\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:07.316737\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:2", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.229) 0:01:48.032 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact tmp_ceph_mgr_keys add mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:13", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.088) 0:01:48.120 ********* ", "ok: [ceph-0] => (item=controller-0) => {\"ansible_facts\": {\"tmp_ceph_mgr_keys\": \"/etc/ceph/ceph.mgr.controller-0.keyring\"}, \"changed\": false, \"item\": \"controller-0\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_keys convert mgr keys to an array] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:20", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.132) 0:01:48.253 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_mgr_keys\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_config_keys merge mgr keys to config and keys paths] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:25", 
"Monday 20 August 2018 06:31:07 -0400 (0:00:00.088) 0:01:48.342 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_config_keys\": [\"/etc/ceph/ceph.client.admin.keyring\", \"/etc/ceph/monmap-ceph\", \"/etc/ceph/ceph.mon.keyring\", \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"/etc/ceph/ceph.mgr.controller-0.keyring\"]}, \"changed\": false}", "", "TASK [ceph-docker-common : stat for ceph config and keys] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/stat_ceph_files.yml:30", "Monday 20 August 2018 06:31:07 -0400 (0:00:00.090) 0:01:48.433 ********* ", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.client.admin.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1534761026.5397818, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"ctime\": 1534761026.5397818, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464328, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.5397818, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => 
(item=/etc/ceph/monmap-ceph) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mon.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1534761026.7087815, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"ctime\": 1534761026.7077816, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464329, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.7077816, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1534761026.8687813, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"ctime\": 1534761026.8687813, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 26262221, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": 
false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.8687813, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1534761027.030781, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"ctime\": 1534761027.030781, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30071520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.030781, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": 
\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1534761027.1997807, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"ctime\": 1534761027.1987808, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 34251791, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.1987808, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1534761027.3837805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"ctime\": 1534761027.3837805, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 38394477, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.3837805, \"nlink\": 1, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "ok: [ceph-0 -> localhost] => (item=/etc/ceph/ceph.mgr.controller-0.keyring) => {\"changed\": false, \"failed_when_result\": false, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1534761029.036778, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761029.0357778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464330, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761029.0357778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}", "", "TASK [ceph-docker-common : fail if we find existing cluster files] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks.yml:5", "Monday 20 August 2018 06:31:09 -0400 (0:00:01.314) 0:01:49.747 ********* ", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.client.admin.keyring', {'_ansible_parsed': True, u'stat': {u'charset': 
u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.5397818, u'block_size': 4096, u'inode': 9464328, u'isgid': False, u'size': 159, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'xusr': False, u'atime': 1534761026.5397818, u'mimetype': u'unknown', u'ctime': 1534761026.5397818, u'isblk': False, u'checksum': u'32793d89de7819833a3849e42af57849c578f1ee', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.client.admin.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.client.admin.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.client.admin.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.client.admin.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": 
true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\"}}, \"item\": \"/etc/ceph/ceph.client.admin.keyring\", \"stat\": {\"atime\": 1534761026.5397818, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"32793d89de7819833a3849e42af57849c578f1ee\", \"ctime\": 1534761026.5397818, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464328, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.5397818, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.client.admin.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 159, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/monmap-ceph', {'_ansible_parsed': True, u'stat': {u'exists': False}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/monmap-ceph', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/monmap-ceph'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/monmap-ceph\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/monmap-ceph\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/monmap-ceph\"}}, \"item\": \"/etc/ceph/monmap-ceph\", \"stat\": {\"exists\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mon.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.7077816, u'block_size': 4096, u'inode': 9464329, u'isgid': False, u'size': 688, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'xusr': False, u'atime': 1534761026.7087815, u'mimetype': u'unknown', u'ctime': 1534761026.7077816, 
u'isblk': False, u'checksum': u'924bb9cec4772c247782ec43a790040656d3ab31', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mon.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mon.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mon.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mon.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\"}}, \"item\": \"/etc/ceph/ceph.mon.keyring\", \"stat\": {\"atime\": 1534761026.7087815, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"924bb9cec4772c247782ec43a790040656d3ab31\", \"ctime\": 1534761026.7077816, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464329, \"isblk\": 
false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.7077816, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mon.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 688, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-osd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761026.8687813, u'block_size': 4096, u'inode': 26262221, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'xusr': False, u'atime': 1534761026.8687813, u'mimetype': u'unknown', u'ctime': 1534761026.8687813, u'isblk': False, u'checksum': u'698d347fdbde95d7d515a3d48d03b13806292388', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': 
u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-osd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-osd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"stat\": {\"atime\": 1534761026.8687813, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"ctime\": 1534761026.8687813, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 26262221, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761026.8687813, \"nlink\": 1, \"path\": 
\"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-osd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761027.030781, u'block_size': 4096, u'inode': 30071520, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'xusr': False, u'atime': 1534761027.030781, u'mimetype': u'unknown', u'ctime': 1534761027.030781, u'isblk': False, u'checksum': u'5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': 
u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rgw/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"stat\": {\"atime\": 1534761027.030781, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"5bcbaa0f982340c854eb6e3f68b1f1e3c6757cfd\", \"ctime\": 1534761027.030781, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 30071520, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.030781, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, 
\"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/var/lib/ceph/bootstrap-mds/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761027.1987808, u'block_size': 4096, u'inode': 34251791, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'xusr': False, u'atime': 1534761027.1997807, u'mimetype': u'unknown', u'ctime': 1534761027.1987808, u'isblk': False, u'checksum': u'0b81209fa4aacb4370dae6fcb06b8a43d48ed42d', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-mds/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': 
u'/var/lib/ceph/bootstrap-mds/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-mds/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-mds/ceph.keyring\", \"stat\": {\"atime\": 1534761027.1997807, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"0b81209fa4aacb4370dae6fcb06b8a43d48ed42d\", \"ctime\": 1534761027.1987808, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 34251791, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.1987808, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-mds/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => 
(item=[u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761027.3837805, u'block_size': 4096, u'inode': 38394477, u'isgid': False, u'size': 113, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'xusr': False, u'atime': 1534761027.3837805, u'mimetype': u'unknown', u'ctime': 1534761027.3837805, u'isblk': False, u'checksum': u'963a0d4350677a12a72614a09b2996d236b0a6d6', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/var/lib/ceph/bootstrap-rbd/ceph.keyring'}]) => {\"changed\": false, \"item\": [\"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, 
\"_ansible_item_label\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", \"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\"}}, \"item\": \"/var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"stat\": {\"atime\": 1534761027.3837805, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"963a0d4350677a12a72614a09b2996d236b0a6d6\", \"ctime\": 1534761027.3837805, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 38394477, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761027.3837805, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 113, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[u'/etc/ceph/ceph.mgr.controller-0.keyring', {'_ansible_parsed': True, u'stat': {u'charset': u'unknown', u'uid': 42430, u'exists': True, u'attr_flags': u'', u'woth': False, u'isreg': True, u'device_type': 0, u'mtime': 1534761029.0357778, u'block_size': 4096, 
u'inode': 9464330, u'isgid': False, u'size': 67, u'executable': False, u'roth': True, u'isuid': False, u'readable': True, u'version': None, u'pw_name': u'mistral', u'gid': 42430, u'ischr': False, u'wusr': True, u'writeable': True, u'isdir': False, u'blocks': 8, u'xoth': False, u'rusr': True, u'nlink': 1, u'issock': False, u'rgrp': True, u'gr_name': u'mistral', u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'xusr': False, u'atime': 1534761029.036778, u'mimetype': u'unknown', u'ctime': 1534761029.0357778, u'isblk': False, u'checksum': u'557a22485a6e0bcdb875a5f5926bdb3409555b7d', u'dev': 64769, u'wgrp': False, u'isfifo': False, u'mode': u'0644', u'xgrp': False, u'islnk': False, u'attributes': []}, '_ansible_item_result': True, '_ansible_no_log': False, '_ansible_delegated_vars': {'ansible_delegated_host': u'localhost', 'ansible_host': u'localhost'}, u'changed': False, 'failed': False, 'item': u'/etc/ceph/ceph.mgr.controller-0.keyring', u'invocation': {u'module_args': {u'checksum_algorithm': u'sha1', u'get_checksum': True, u'follow': False, u'path': u'/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring', u'get_md5': None, u'get_mime': True, u'get_attributes': True}}, 'failed_when_result': False, '_ansible_ignore_errors': None, '_ansible_item_label': u'/etc/ceph/ceph.mgr.controller-0.keyring'}]) => {\"changed\": false, \"item\": [\"/etc/ceph/ceph.mgr.controller-0.keyring\", {\"_ansible_delegated_vars\": {\"ansible_delegated_host\": \"localhost\", \"ansible_host\": \"localhost\"}, \"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"checksum_algorithm\": \"sha1\", 
\"follow\": false, \"get_attributes\": true, \"get_checksum\": true, \"get_md5\": null, \"get_mime\": true, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\"}}, \"item\": \"/etc/ceph/ceph.mgr.controller-0.keyring\", \"stat\": {\"atime\": 1534761029.036778, \"attr_flags\": \"\", \"attributes\": [], \"block_size\": 4096, \"blocks\": 8, \"charset\": \"unknown\", \"checksum\": \"557a22485a6e0bcdb875a5f5926bdb3409555b7d\", \"ctime\": 1534761029.0357778, \"dev\": 64769, \"device_type\": 0, \"executable\": false, \"exists\": true, \"gid\": 42430, \"gr_name\": \"mistral\", \"inode\": 9464330, \"isblk\": false, \"ischr\": false, \"isdir\": false, \"isfifo\": false, \"isgid\": false, \"islnk\": false, \"isreg\": true, \"issock\": false, \"isuid\": false, \"mimetype\": \"unknown\", \"mode\": \"0644\", \"mtime\": 1534761029.0357778, \"nlink\": 1, \"path\": \"/var/lib/mistral/overcloud/ceph-ansible/fetch_dir/00d03b50-a460-11e8-8cf1-525400721501//etc/ceph/ceph.mgr.controller-0.keyring\", \"pw_name\": \"mistral\", \"readable\": true, \"rgrp\": true, \"roth\": true, \"rusr\": true, \"size\": 67, \"uid\": 42430, \"version\": null, \"wgrp\": false, \"woth\": false, \"writeable\": true, \"wusr\": true, \"xgrp\": false, \"xoth\": false, \"xusr\": false}}], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on atomic] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_atomic.yml:2", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.318) 0:01:50.066 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_atomic.yml:6", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.039) 
0:01:50.106 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on redhat or suse] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:2", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.040) 0:01:50.146 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on redhat or suse] **********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_rpm.yml:13", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.048) 0:01:50.195 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_rpm.yml:7", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.241 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : check ntp installation on debian] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:2", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.288 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : install ntp on debian] ******************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/checks/check_ntp_debian.yml:11", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.053) 0:01:50.341 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : start the ntp service] ******************************", 
"task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/misc/ntp_debian.yml:7", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.046) 0:01:50.387 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mon container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:3", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.049) 0:01:50.436 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph osd container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:12", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.054) 0:01:50.491 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mds container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:21", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.054) 0:01:50.545 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph rgw container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:30", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.050) 0:01:50.596 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph mgr container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:39", "Monday 20 August 2018 06:31:09 -0400 (0:00:00.042) 0:01:50.639 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result 
was False\"}", "", "TASK [ceph-docker-common : inspect ceph rbd mirror container] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:48", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:50.685 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspect ceph nfs container] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:57", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:50.729 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mon container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:67", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.055) 0:01:50.784 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph osd container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:76", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.046) 0:01:50.831 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rgw container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:85", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.046) 0:01:50.877 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mds container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:94", "Monday 20 August 2018 06:31:10 -0400 
(0:00:00.040) 0:01:50.918 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph mgr container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:103", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:50.963 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph rbd mirror container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:112", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.042) 0:01:51.005 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : inspecting ceph nfs container image before pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:121", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.050) 0:01:51.055 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:130", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:51.100 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:137", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.146 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_repodigest_before_pulling] ***", 
"task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:144", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.051) 0:01:51.198 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:151", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.243 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:158", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.044) 0:01:51.287 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:165", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.048) 0:01:51.336 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_repodigest_before_pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:172", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.050) 0:01:51.386 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : pulling 192.168.24.1:8787/rhceph:3-11 image] ********", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:179", "Monday 20 August 2018 06:31:10 -0400 (0:00:00.045) 0:01:51.432 ********* ", "ok: [ceph-0] => {\"attempts\": 1, \"changed\": false, \"cmd\": 
[\"timeout\", \"300s\", \"docker\", \"pull\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:12.664455\", \"end\": \"2018-08-20 10:31:23.610575\", \"rc\": 0, \"start\": \"2018-08-20 10:31:10.946120\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"Trying to pull repository 192.168.24.1:8787/rhceph ... \\n3-11: Pulling from 192.168.24.1:8787/rhceph\\nd02c3bd49e78: Pulling fs layer\\n475b0168c252: Pulling fs layer\\n9cc28bc5e4f9: Pulling fs layer\\n475b0168c252: Download complete\\nd02c3bd49e78: Verifying Checksum\\nd02c3bd49e78: Download complete\\n9cc28bc5e4f9: Verifying Checksum\\n9cc28bc5e4f9: Download complete\\nd02c3bd49e78: Pull complete\\n475b0168c252: Pull complete\\n9cc28bc5e4f9: Pull complete\\nDigest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\nStatus: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\", \"stdout_lines\": [\"Trying to pull repository 192.168.24.1:8787/rhceph ... \", \"3-11: Pulling from 192.168.24.1:8787/rhceph\", \"d02c3bd49e78: Pulling fs layer\", \"475b0168c252: Pulling fs layer\", \"9cc28bc5e4f9: Pulling fs layer\", \"475b0168c252: Download complete\", \"d02c3bd49e78: Verifying Checksum\", \"d02c3bd49e78: Download complete\", \"9cc28bc5e4f9: Verifying Checksum\", \"9cc28bc5e4f9: Download complete\", \"d02c3bd49e78: Pull complete\", \"475b0168c252: Pull complete\", \"9cc28bc5e4f9: Pull complete\", \"Digest: sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\", \"Status: Downloaded newer image for 192.168.24.1:8787/rhceph:3-11\"]}", "", "TASK [ceph-docker-common : inspecting 192.168.24.1:8787/rhceph:3-11 image after pulling] ***", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:189", "Monday 20 August 2018 06:31:23 -0400 (0:00:12.883) 0:02:04.316 ********* ", "changed: [ceph-0] => {\"changed\": true, \"cmd\": [\"docker\", \"inspect\", \"192.168.24.1:8787/rhceph:3-11\"], \"delta\": \"0:00:00.022495\", \"end\": \"2018-08-20 
10:31:23.851040\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:23.828545\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"[\\n {\\n \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\\n \\\"RepoTags\\\": [\\n \\\"192.168.24.1:8787/rhceph:3-11\\\"\\n ],\\n \\\"RepoDigests\\\": [\\n \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\\n ],\\n \\\"Parent\\\": \\\"\\\",\\n \\\"Comment\\\": \\\"\\\",\\n \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\\n \\\"Container\\\": \\\"\\\",\\n \\\"ContainerConfig\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": [\\n \\\"/bin/sh\\\",\\n \\\"-c\\\",\\n \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\\n ],\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n 
\\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"DockerVersion\\\": \\\"1.12.6\\\",\\n \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\\n \\\"Config\\\": {\\n \\\"Hostname\\\": \\\"b82aed11f771\\\",\\n \\\"Domainname\\\": \\\"\\\",\\n \\\"User\\\": \\\"\\\",\\n \\\"AttachStdin\\\": false,\\n \\\"AttachStdout\\\": false,\\n \\\"AttachStderr\\\": false,\\n \\\"ExposedPorts\\\": {\\n \\\"5000/tcp\\\": {},\\n \\\"6789/tcp\\\": {},\\n \\\"6800/tcp\\\": {},\\n \\\"6801/tcp\\\": {},\\n \\\"6802/tcp\\\": {},\\n \\\"6803/tcp\\\": {},\\n \\\"6804/tcp\\\": {},\\n \\\"6805/tcp\\\": {},\\n \\\"80/tcp\\\": {}\\n },\\n \\\"Tty\\\": false,\\n \\\"OpenStdin\\\": false,\\n \\\"StdinOnce\\\": false,\\n \\\"Env\\\": [\\n \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\\n \\\"container=oci\\\",\\n \\\"CEPH_VERSION=luminous\\\",\\n \\\"CEPH_POINT_RELEASE=\\\"\\n ],\\n \\\"Cmd\\\": null,\\n \\\"ArgsEscaped\\\": true,\\n \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\\n \\\"Volumes\\\": null,\\n \\\"WorkingDir\\\": \\\"/\\\",\\n \\\"Entrypoint\\\": [\\n \\\"/entrypoint.sh\\\"\\n ],\\n \\\"OnBuild\\\": [],\\n \\\"Labels\\\": {\\n \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\\n \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\\n \\\"GIT_CLEAN\\\": \\\"True\\\",\\n \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\\n \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\\n \\\"RELEASE\\\": \\\"stable-3.0\\\",\\n \\\"architecture\\\": \\\"x86_64\\\",\\n \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\\n \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\\n \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\\n \\\"com.redhat.component\\\": 
\\\"rhceph-rhel7-container\\\",\\n \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"distribution-scope\\\": \\\"public\\\",\\n \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\\n \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\\n \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\\n \\\"io.openshift.expose-services\\\": \\\"\\\",\\n \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\\n \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\\n \\\"name\\\": \\\"rhceph\\\",\\n \\\"release\\\": \\\"11\\\",\\n \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\\n \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\\n \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\\n \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\\n \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\\n \\\"vcs-type\\\": \\\"git\\\",\\n \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\\n \\\"version\\\": \\\"3\\\"\\n }\\n },\\n \\\"Architecture\\\": \\\"amd64\\\",\\n \\\"Os\\\": \\\"linux\\\",\\n \\\"Size\\\": 616048717,\\n \\\"VirtualSize\\\": 616048717,\\n \\\"GraphDriver\\\": {\\n \\\"Name\\\": \\\"overlay2\\\",\\n \\\"Data\\\": {\\n \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/18a142311a48efd57707657709b7e403db31f660db7f02e0cc514775dc4b6ac8/diff:/var/lib/docker/overlay2/724e96af25c6a782ebb1570fc169a5d43b3ee2e8bb616c568ba70d5106537d58/diff\\\",\\n \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/merged\\\",\\n \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/diff\\\",\\n \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/work\\\"\\n }\\n },\\n \\\"RootFS\\\": {\\n \\\"Type\\\": \\\"layers\\\",\\n \\\"Layers\\\": [\\n \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\\n \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\\n \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\\n ]\\n }\\n }\\n]\", \"stdout_lines\": [\"[\", \" {\", \" \\\"Id\\\": \\\"sha256:fac62128c457eba3704e9095b20310acef7d9069d092f3fff70aac590f36e5f5\\\",\", \" \\\"RepoTags\\\": [\", \" \\\"192.168.24.1:8787/rhceph:3-11\\\"\", \" ],\", \" \\\"RepoDigests\\\": [\", \" \\\"192.168.24.1:8787/rhceph@sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\\\"\", \" ],\", \" \\\"Parent\\\": \\\"\\\",\", \" \\\"Comment\\\": \\\"\\\",\", \" \\\"Created\\\": \\\"2018-07-06T17:32:24.980232Z\\\",\", \" \\\"Container\\\": \\\"\\\",\", \" \\\"ContainerConfig\\\": {\", \" \\\"Hostname\\\": 
\\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": [\", \" \\\"/bin/sh\\\",\", \" \\\"-c\\\",\", \" \\\"rm -f '/etc/yum.repos.d/rhceph-rhel7-3.0-z4.repo'\\\"\", \" ],\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"sha256:bcfe5600e9f2dc71e5c79b8b481aa6d7c9ee011a998ec60f175d2da8ec1cc72d\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": \\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": 
\\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"DockerVersion\\\": \\\"1.12.6\\\",\", \" \\\"Author\\\": \\\"Erwan Velu <evelu@redhat.com>\\\",\", \" \\\"Config\\\": {\", \" \\\"Hostname\\\": \\\"b82aed11f771\\\",\", \" \\\"Domainname\\\": \\\"\\\",\", \" \\\"User\\\": \\\"\\\",\", \" \\\"AttachStdin\\\": false,\", \" \\\"AttachStdout\\\": false,\", \" \\\"AttachStderr\\\": false,\", \" \\\"ExposedPorts\\\": {\", \" \\\"5000/tcp\\\": {},\", \" \\\"6789/tcp\\\": {},\", \" \\\"6800/tcp\\\": {},\", \" \\\"6801/tcp\\\": {},\", \" \\\"6802/tcp\\\": {},\", \" \\\"6803/tcp\\\": {},\", \" \\\"6804/tcp\\\": {},\", \" \\\"6805/tcp\\\": {},\", \" \\\"80/tcp\\\": {}\", \" },\", \" \\\"Tty\\\": false,\", \" \\\"OpenStdin\\\": false,\", \" \\\"StdinOnce\\\": false,\", \" \\\"Env\\\": [\", \" \\\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\\",\", \" \\\"container=oci\\\",\", \" \\\"CEPH_VERSION=luminous\\\",\", \" \\\"CEPH_POINT_RELEASE=\\\"\", \" ],\", \" \\\"Cmd\\\": null,\", \" \\\"ArgsEscaped\\\": true,\", \" \\\"Image\\\": \\\"5e5075b5d174991eca331d93e54f80b46b085e141214f618270a1e099d7dc7c3\\\",\", \" \\\"Volumes\\\": null,\", \" \\\"WorkingDir\\\": \\\"/\\\",\", \" \\\"Entrypoint\\\": [\", \" \\\"/entrypoint.sh\\\"\", \" ],\", \" \\\"OnBuild\\\": [],\", \" \\\"Labels\\\": {\", \" \\\"CEPH_POINT_RELEASE\\\": \\\"\\\",\", \" \\\"GIT_BRANCH\\\": \\\"stable-3.0\\\",\", \" \\\"GIT_CLEAN\\\": \\\"True\\\",\", \" \\\"GIT_COMMIT\\\": \\\"22fcc9ded777159b5b2689eead2640fa4b91682d\\\",\", \" \\\"GIT_REPO\\\": \\\"git@github.com:ceph/ceph-container.git\\\",\", \" \\\"RELEASE\\\": \\\"stable-3.0\\\",\", \" \\\"architecture\\\": \\\"x86_64\\\",\", \" \\\"authoritative-source-url\\\": \\\"registry.access.redhat.com\\\",\", \" \\\"build-date\\\": 
\\\"2018-07-06T17:29:12.794306\\\",\", \" \\\"com.redhat.build-host\\\": \\\"osbs-cpt-011.ocp.osbs.upshift.eng.rdu2.redhat.com\\\",\", \" \\\"com.redhat.component\\\": \\\"rhceph-rhel7-container\\\",\", \" \\\"description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"distribution-scope\\\": \\\"public\\\",\", \" \\\"install\\\": \\\"/usr/bin/docker run --rm --privileged -v /:/host -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -e MON_NAME=${MON_NAME} -e OSD_DEVICE=${OSD_DEVICE} -e HOST=/host -e IMAGE=${IMAGE} --entrypoint=/install.sh ${IMAGE}\\\",\", \" \\\"io.k8s.description\\\": \\\"Red Hat Ceph Storage 3\\\",\", \" \\\"io.k8s.display-name\\\": \\\"Red Hat Ceph Storage 3 on RHEL 7\\\",\", \" \\\"io.openshift.expose-services\\\": \\\"\\\",\", \" \\\"io.openshift.tags\\\": \\\"rhceph ceph\\\",\", \" \\\"maintainer\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"name\\\": \\\"rhceph\\\",\", \" \\\"release\\\": \\\"11\\\",\", \" \\\"run\\\": \\\"/usr/bin/docker run -d --net=host --pid=host -e MON_NAME=${MON_NAME} -e MON_IP=${MON_IP} -e CEPH_PUBLIC_NETWORK=${CEPH_PUBLIC_NETWORK} -e CEPH_DAEMON=${CEPH_DAEMON} -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ${IMAGE}\\\",\", \" \\\"summary\\\": \\\"Provides the latest Red Hat Ceph Storage 3 on RHEL 7 in a fully featured and supported base image.\\\",\", \" \\\"url\\\": \\\"https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/3-11\\\",\", \" \\\"usage\\\": \\\"This image is very generic and does not serve a single use case. 
Use it as a base to build your own images.\\\",\", \" \\\"vcs-ref\\\": \\\"b755c664223f3368750a39c084255a26aa9667cc\\\",\", \" \\\"vcs-type\\\": \\\"git\\\",\", \" \\\"vendor\\\": \\\"Red Hat, Inc.\\\",\", \" \\\"version\\\": \\\"3\\\"\", \" }\", \" },\", \" \\\"Architecture\\\": \\\"amd64\\\",\", \" \\\"Os\\\": \\\"linux\\\",\", \" \\\"Size\\\": 616048717,\", \" \\\"VirtualSize\\\": 616048717,\", \" \\\"GraphDriver\\\": {\", \" \\\"Name\\\": \\\"overlay2\\\",\", \" \\\"Data\\\": {\", \" \\\"LowerDir\\\": \\\"/var/lib/docker/overlay2/18a142311a48efd57707657709b7e403db31f660db7f02e0cc514775dc4b6ac8/diff:/var/lib/docker/overlay2/724e96af25c6a782ebb1570fc169a5d43b3ee2e8bb616c568ba70d5106537d58/diff\\\",\", \" \\\"MergedDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/merged\\\",\", \" \\\"UpperDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/diff\\\",\", \" \\\"WorkDir\\\": \\\"/var/lib/docker/overlay2/0763fb7e309d45133ae51c522f848bb36983087fb26b0a23fec71e16dbef6938/work\\\"\", \" }\", \" },\", \" \\\"RootFS\\\": {\", \" \\\"Type\\\": \\\"layers\\\",\", \" \\\"Layers\\\": [\", \" \\\"sha256:24a5c6254cd9693d64581b6f3df5e4ee551cfd5429cf25301d12afa82ac91037\\\",\", \" \\\"sha256:9a001a3500e22038e448212dac414fe1f876024e85874f014624581b9c0332e3\\\",\", \" \\\"sha256:1a3f447d46a2deec87fb651eb0b69e1eec48de92cb1e2134e2f92149094c0025\\\"\", \" ]\", \" }\", \" }\", \"]\"]}", "", "TASK [ceph-docker-common : set_fact image_repodigest_after_pulling] ************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:194", "Monday 20 August 2018 06:31:23 -0400 (0:00:00.245) 0:02:04.561 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"image_repodigest_after_pulling\": \"sha256:02b1d84525020e331b2800dee594b0ccc6ff97d1bfbd0326dcb8616412e4c64e\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_mon_image_updated] 
********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:200", "Monday 20 August 2018 06:31:23 -0400 (0:00:00.086) 0:02:04.647 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_osd_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:211", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.056) 0:02:04.704 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mds_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:222", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.049) 0:02:04.754 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rgw_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:233", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.043) 0:02:04.797 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_mgr_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:244", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.842 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_rbd_mirror_image_updated] *************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:255", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.887 ********* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_nfs_image_updated] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:266", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:04.932 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : export local ceph dev image] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:277", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.051) 0:02:04.983 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : copy ceph dev image file] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:285", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.044) 0:02:05.028 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : load ceph dev image] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:292", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.045) 0:02:05.073 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : remove tmp ceph dev image file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/fetch_image.yml:297", "Monday 20 August 2018 06:31:24 -0400 (0:00:00.042) 0:02:05.116 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : get ceph version] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:84", "Monday 20 August 2018 
06:31:24 -0400 (0:00:00.045) 0:02:05.161 ********* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint\", \"/usr/bin/ceph\", \"192.168.24.1:8787/rhceph:3-11\", \"--version\"], \"delta\": \"0:00:00.433727\", \"end\": \"2018-08-20 10:31:25.105873\", \"rc\": 0, \"start\": \"2018-08-20 10:31:24.672146\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\", \"stdout_lines\": [\"ceph version 12.2.4-30.el7cp (efcc05dbe834f3facbf62774d7709c40ace9d9ae) luminous (stable)\"]}", "", "TASK [ceph-docker-common : set_fact ceph_version ceph_version.stdout.split] ****", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/main.yml:90", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.644) 0:02:05.806 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_version\": \"12.2.4-30.el7cp\"}, \"changed\": false}", "", "TASK [ceph-docker-common : set_fact ceph_release jewel] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:2", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.176) 0:02:05.983 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release kraken] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:8", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.043) 0:02:06.026 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release luminous] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:14", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.043) 0:02:06.070 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"ceph_release\": \"luminous\"}, \"changed\": false}", "", "TASK 
[ceph-docker-common : set_fact ceph_release mimic] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:20", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.069) 0:02:06.140 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : set_fact ceph_release nautilus] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/release.yml:26", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.050) 0:02:06.190 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-docker-common : create bootstrap directories] ***********************", "task path: /usr/share/ceph-ansible/roles/ceph-docker-common/tasks/dirs_permissions.yml:2", "Monday 20 August 2018 06:31:25 -0400 (0:00:00.057) 0:02:06.248 ********* ", "changed: [ceph-0] => (item=/etc/ceph) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/etc/ceph\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-osd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-osd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-mds) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-mds\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-mds\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => 
(item=/var/lib/ceph/bootstrap-rgw) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rgw\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rgw\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-rbd) => {\"changed\": true, \"gid\": 64045, \"group\": \"64045\", \"item\": \"/var/lib/ceph/bootstrap-rbd\", \"mode\": \"0755\", \"owner\": \"64045\", \"path\": \"/var/lib/ceph/bootstrap-rbd\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 64045}", "", "TASK [ceph-config : create ceph conf directory] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:4", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.950) 0:02:07.199 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate ceph configuration file: ceph.conf] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:12", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.043) 0:02:07.242 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : create a local fetch directory if it does not exist] *******", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:38", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.045) 0:02:07.288 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : generate cluster uuid] *************************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:54", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.053) 0:02:07.341 ********* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : read cluster uuid if it already exists] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:64", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.045) 0:02:07.387 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-config : ensure /etc/ceph exists] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76", "Monday 20 August 2018 06:31:26 -0400 (0:00:00.042) 0:02:07.429 ********* ", "changed: [ceph-0] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-config : generate ceph.conf configuration file] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84", "Monday 20 August 2018 06:31:27 -0400 (0:00:00.314) 0:02:07.744 ********* ", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mon restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mon daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mon_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy osd restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph osds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _osd_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set 
_mds_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mds restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mds daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mds_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rgw restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rgw daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rgw_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy mgr restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph mgr daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _mgr_handler_called after restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called before restart for ceph-0", "NOTIFIED HANDLER ceph-defaults : copy rbd mirror restart script for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - non container for ceph-0", "NOTIFIED HANDLER ceph-defaults : restart ceph rbd mirror daemon(s) - container for ceph-0", "NOTIFIED HANDLER ceph-defaults : set _rbdmirror_handler_called after restart for ceph-0", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e4920028e2dd848015696ddfcacfa786c16605f9\", \"dest\": \"/etc/ceph/ceph.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"d5a7dc456ede0edf6350f8cd7ff9f719\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 1213, \"src\": 
\"/tmp/ceph_ansible_tmp/ansible-tmp-1534761087.24-274813996358616/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-config : set fsid fact when generate_fsid = true] *******************", "task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:102", "Monday 20 August 2018 06:31:29 -0400 (0:00:02.059) 0:02:09.803 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure public_network configured] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:2", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.051) 0:02:09.855 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure cluster_network configured] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:8", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.046) 0:02:09.902 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure journal_size configured] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:15", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.047) 0:02:09.949 ********* ", "ok: [ceph-0] => {", " \"msg\": \"WARNING: journal_size is configured to 512, which is less than 5GB. 
This is not recommended and can lead to severe issues.\"", "}", "", "TASK [ceph-osd : make sure an osd scenario was chosen] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:23", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.085) 0:02:10.035 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure a valid osd scenario was chosen] ********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:31", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.062) 0:02:10.098 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify devices have been provided] ****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:39", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.055) 0:02:10.153 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if osd_scenario lvm is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:49", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.073) 0:02:10.227 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify lvm_volumes have been provided] ************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:59", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.057) 0:02:10.284 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the lvm_volumes variable is a list] *****************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:69", 
"Monday 20 August 2018 06:31:29 -0400 (0:00:00.053) 0:02:10.338 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the devices variable is a list] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:79", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.052) 0:02:10.390 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : verify dedicated devices have been provided] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:88", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.059) 0:02:10.450 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : make sure the dedicated_devices variable is a list] ***********", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:98", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.053) 0:02:10.503 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : check if bluestore is supported by the selected ceph version] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_mandatory_vars.yml:109", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.055) 0:02:10.559 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include system_tuning.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:5", "Monday 20 August 2018 06:31:29 -0400 (0:00:00.048) 0:02:10.608 ********* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml for ceph-0", "", "TASK [ceph-osd : disable osd directory parsing by updatedb] ********************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:2", "Monday 20 August 2018 06:31:30 -0400 (0:00:00.079) 0:02:10.687 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:11", "Monday 20 August 2018 06:31:30 -0400 (0:00:00.043) 0:02:10.731 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : create tmpfiles.d directory] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:22", "Monday 20 August 2018 06:31:30 -0400 (0:00:00.050) 0:02:10.782 ********* ", "ok: [ceph-0] => {\"changed\": false, \"gid\": 0, \"group\": \"root\", \"mode\": \"0755\", \"owner\": \"root\", \"path\": \"/etc/tmpfiles.d\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 0}", "", "TASK [ceph-osd : disable transparent hugepage] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:33", "Monday 20 August 2018 06:31:30 -0400 (0:00:00.323) 0:02:11.105 ********* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"e000059a4cfd8ce350b13f14305a46eaf99849ba\", \"dest\": \"/etc/tmpfiles.d/ceph_transparent_hugepage.conf\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"21ac872f3aa1fb44b01d4f7ab00a35fc\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:etc_t:s0\", \"size\": 158, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761090.6-255378817369816/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : get default vm.min_free_kbytes] *******************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:45", "Monday 20 August 2018 06:31:31 -0400 
(0:00:00.604) 0:02:11.710 ********* ", "ok: [ceph-0] => {\"changed\": false, \"cmd\": [\"sysctl\", \"-b\", \"vm.min_free_kbytes\"], \"delta\": \"0:00:00.004716\", \"end\": \"2018-08-20 10:31:31.242585\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:31:31.237869\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"67584\", \"stdout_lines\": [\"67584\"]}", "", "TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:52", "Monday 20 August 2018 06:31:31 -0400 (0:00:00.234) 0:02:11.945 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"vm_min_free_kbytes\": \"67584\"}, \"changed\": false}", "", "TASK [ceph-osd : apply operating system tuning] ********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/system_tuning.yml:56", "Monday 20 August 2018 06:31:31 -0400 (0:00:00.201) 0:02:12.147 ********* ", "changed: [ceph-0] => (item={u'enable': u\"(osd_objectstore == 'bluestore')\", u'name': u'fs.aio-max-nr', u'value': u'1048576'}) => {\"changed\": true, \"item\": {\"enable\": \"(osd_objectstore == 'bluestore')\", \"name\": \"fs.aio-max-nr\", \"value\": \"1048576\"}}", "changed: [ceph-0] => (item={u'name': u'fs.file-max', u'value': 26234859}) => {\"changed\": true, \"item\": {\"name\": \"fs.file-max\", \"value\": 26234859}}", "changed: [ceph-0] => (item={u'name': u'vm.zone_reclaim_mode', u'value': 0}) => {\"changed\": true, \"item\": {\"name\": \"vm.zone_reclaim_mode\", \"value\": 0}}", "changed: [ceph-0] => (item={u'name': u'vm.swappiness', u'value': 10}) => {\"changed\": true, \"item\": {\"name\": \"vm.swappiness\", \"value\": 10}}", "changed: [ceph-0] => (item={u'name': u'vm.min_free_kbytes', u'value': u'67584'}) => {\"changed\": true, \"item\": {\"name\": \"vm.min_free_kbytes\", \"value\": \"67584\"}}", "", "TASK [ceph-osd : install dependencies] *****************************************", "task path: 
/usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:10", "Monday 20 August 2018 06:31:32 -0400 (0:00:01.135) 0:02:13.282 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include common.yml] *******************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:18", "Monday 20 August 2018 06:31:32 -0400 (0:00:00.043) 0:02:13.326 ********* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml for ceph-0", "", "TASK [ceph-osd : create bootstrap-osd and osd directories] *********************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:2", "Monday 20 August 2018 06:31:32 -0400 (0:00:00.075) 0:02:13.402 ********* ", "changed: [ceph-0] => (item=/var/lib/ceph/bootstrap-osd/) => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/bootstrap-osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/bootstrap-osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "ok: [ceph-0] => (item=/var/lib/ceph/osd/) => {\"changed\": false, \"gid\": 167, \"group\": \"167\", \"item\": \"/var/lib/ceph/osd/\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/var/lib/ceph/osd/\", \"secontext\": \"unconfined_u:object_r:var_lib_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}", "", "TASK [ceph-osd : copy ceph key(s) if needed] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/common.yml:15", "Monday 20 August 2018 06:31:33 -0400 (0:00:00.476) 0:02:13.878 ********* ", "changed: [ceph-0] => (item={u'name': u'/var/lib/ceph/bootstrap-osd/ceph.keyring', u'copy_key': True}) => {\"changed\": true, \"checksum\": \"698d347fdbde95d7d515a3d48d03b13806292388\", \"dest\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"gid\": 167, \"group\": \"167\", \"item\": {\"copy_key\": 
true, \"name\": \"/var/lib/ceph/bootstrap-osd/ceph.keyring\"}, \"md5sum\": \"e32a66ddc038f6331ba8cd3a3e75084e\", \"mode\": \"0600\", \"owner\": \"167\", \"secontext\": \"system_u:object_r:var_lib_t:s0\", \"size\": 113, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761093.39-275011204224461/source\", \"state\": \"file\", \"uid\": 167}", "skipping: [ceph-0] => (item={u'name': u'/etc/ceph/ceph.client.admin.keyring', u'copy_key': False}) => {\"changed\": false, \"item\": {\"copy_key\": false, \"name\": \"/etc/ceph/ceph.client.admin.keyring\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:2", "Monday 20 August 2018 06:31:33 -0400 (0:00:00.677) 0:02:14.555 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options 'ceph_disk_cli_options'] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:11", "Monday 20 August 2018 06:31:33 -0400 (0:00:00.043) 0:02:14.599 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph'] **************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:20", "Monday 20 August 2018 06:31:33 -0400 (0:00:00.052) 0:02:14.651 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --bluestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:29", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.053) 0:02:14.705 ********* ", "skipping: [ceph-0] => {\"changed\": false, 
\"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --filestore --dmcrypt'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:38", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.042) 0:02:14.748 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact ceph_disk_cli_options '--cluster ceph --dmcrypt'] ****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:47", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.051) 0:02:14.800 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e KV_TYPE=etcd -e KV_IP=127.0.0.1 -e KV_PORT=2379'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:56", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.046) 0:02:14.847 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:62", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.037) 0:02:14.884 ********* ", "ok: [ceph-0] => {\"ansible_facts\": {\"docker_env_args\": \"-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0\"}, \"changed\": false}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:70", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.081) 0:02:14.966 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact 
docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:78", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.048) 0:02:15.015 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact docker_env_args '-e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=1'] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/ceph_disk_cli_options_facts.yml:86", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.044) 0:02:15.060 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact devices generate device list when osd_auto_discovery] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:2", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.047) 0:02:15.108 ********* ", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'20971520', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {u'vda1': {u'sectorsize': 512, u'uuid': u'2018-08-20-06-12-42-00', u'links': {u'masters': [], u'labels': [u'config-2'], u'ids': [], u'uuids': [u'2018-08-20-06-12-42-00']}, u'sectors': u'2048', u'start': u'2048', u'holders': [], u'size': u'1.00 MB'}, u'vda2': {u'sectorsize': 512, u'uuid': u'7fbefd08-62e0-41fb-b85e-19cd4d681773', u'links': {u'masters': [], u'labels': [u'img-rootfs'], u'ids': [], u'uuids': [u'7fbefd08-62e0-41fb-b85e-19cd4d681773']}, u'sectors': u'20967391', u'start': u'4096', u'holders': [], u'size': u'10.00 GB'}}, u'holders': [], u'size': u'10.00 GB'}, 'key': u'vda'}) => {\"changed\": false, \"item\": {\"key\": \"vda\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {\"vda1\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"config-2\"], \"masters\": [], \"uuids\": [\"2018-08-20-06-12-42-00\"]}, \"sectors\": \"2048\", \"sectorsize\": 512, \"size\": \"1.00 MB\", \"start\": \"2048\", \"uuid\": \"2018-08-20-06-12-42-00\"}, \"vda2\": {\"holders\": [], \"links\": {\"ids\": [], \"labels\": [\"img-rootfs\"], \"masters\": [], \"uuids\": [\"7fbefd08-62e0-41fb-b85e-19cd4d681773\"]}, \"sectors\": \"20967391\", \"sectorsize\": 512, \"size\": \"10.00 GB\", \"start\": \"4096\", \"uuid\": \"7fbefd08-62e0-41fb-b85e-19cd4d681773\"}}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"20971520\", \"sectorsize\": \"512\", \"size\": \"10.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', 
u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdc'}) => {\"changed\": false, \"item\": {\"key\": \"vdc\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdb'}) => {\"changed\": false, \"item\": {\"key\": \"vdb\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vde'}) => {\"changed\": false, \"item\": {\"key\": \"vde\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. 
Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdd'}) => {\"changed\": false, \"item\": {\"key\": \"vdd\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={'value': {u'scheduler_mode': u'mq-deadline', u'rotational': u'1', u'vendor': u'0x1af4', u'sectors': u'23068672', u'links': {u'masters': [], u'labels': [], u'ids': [], u'uuids': []}, u'sas_device_handle': None, u'sas_address': None, u'virtual': 1, u'host': u'SCSI storage controller: Red Hat, Inc. Virtio block device', u'sectorsize': u'512', u'removable': u'0', u'support_discard': u'0', u'model': None, u'partitions': {}, u'holders': [], u'size': u'11.00 GB'}, 'key': u'vdf'}) => {\"changed\": false, \"item\": {\"key\": \"vdf\", \"value\": {\"holders\": [], \"host\": \"SCSI storage controller: Red Hat, Inc. 
Virtio block device\", \"links\": {\"ids\": [], \"labels\": [], \"masters\": [], \"uuids\": []}, \"model\": null, \"partitions\": {}, \"removable\": \"0\", \"rotational\": \"1\", \"sas_address\": null, \"sas_device_handle\": null, \"scheduler_mode\": \"mq-deadline\", \"sectors\": \"23068672\", \"sectorsize\": \"512\", \"size\": \"11.00 GB\", \"support_discard\": \"0\", \"vendor\": \"0x1af4\", \"virtual\": 1}}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : resolve dedicated device link(s)] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:15", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.097) 0:02:15.205 ********* ", "", "TASK [ceph-osd : set_fact build dedicated_devices from resolved symlinks] ******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:24", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.045) 0:02:15.251 ********* ", "", "TASK [ceph-osd : set_fact build final dedicated_devices list] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/build_devices.yml:32", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.043) 0:02:15.294 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : read information about the devices] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:29", "Monday 20 August 2018 06:31:34 -0400 (0:00:00.043) 0:02:15.337 ********* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block 
Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}", "", "TASK [ceph-osd : check the partition status of the osd disks] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:2", "Monday 20 August 2018 06:31:35 -0400 (0:00:00.995) 0:02:16.333 ********* ", "ok: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007689\", \"end\": \"2018-08-20 10:31:35.856259\", \"failed_when_result\": false, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:35.848570\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007042\", \"end\": \"2018-08-20 10:31:36.012783\", 
\"failed_when_result\": false, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.005741\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006523\", \"end\": \"2018-08-20 10:31:36.167965\", \"failed_when_result\": false, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.161442\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.007191\", \"end\": \"2018-08-20 10:31:36.321282\", \"failed_when_result\": false, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.314091\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007161\", \"end\": \"2018-08-20 10:31:36.467284\", \"failed_when_result\": false, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.460123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : create gpt disk label] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/check_gpt.yml:11", "Monday 20 August 2018 06:31:36 -0400 (0:00:00.832) 0:02:17.166 ********* ", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdb'], u'end': u'2018-08-20 10:31:35.856259', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': 
{u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdb', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdb', u'delta': u'0:00:00.007689', '_ansible_item_label': u'/dev/vdb', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:35.848570', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdb']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdb\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.011701\", \"end\": \"2018-08-20 10:31:36.716464\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdb\"], \"delta\": \"0:00:00.007689\", \"end\": \"2018-08-20 10:31:35.856259\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdb\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdb\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:35.848570\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:36.704763\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdc'], u'end': u'2018-08-20 10:31:36.012783', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdc', 
u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdc', u'delta': u'0:00:00.007042', '_ansible_item_label': u'/dev/vdc', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.005741', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdc']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdc\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.009932\", \"end\": \"2018-08-20 10:31:36.895035\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdc\"], \"delta\": \"0:00:00.007042\", \"end\": \"2018-08-20 10:31:36.012783\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdc\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdc\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.005741\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:36.885103\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdd'], u'end': u'2018-08-20 10:31:36.167965', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdd', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdd', u'delta': u'0:00:00.006523', 
'_ansible_item_label': u'/dev/vdd', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.161442', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vdd']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdd\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.008962\", \"end\": \"2018-08-20 10:31:37.073108\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdd\"], \"delta\": \"0:00:00.006523\", \"end\": \"2018-08-20 10:31:36.167965\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdd\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdd\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.161442\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.064146\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vde'], u'end': u'2018-08-20 10:31:36.321282', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vde', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vde', u'delta': u'0:00:00.007191', '_ansible_item_label': u'/dev/vde', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 
'failed_when_result': False, u'start': u'2018-08-20 10:31:36.314091', '_ansible_ignore_errors': None, u'failed': False}, u'/dev/vde']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vde\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.006835\", \"end\": \"2018-08-20 10:31:37.232430\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vde\"], \"delta\": \"0:00:00.007191\", \"end\": \"2018-08-20 10:31:36.321282\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vde\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vde\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.314091\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vde\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.225595\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "ok: [ceph-0] => (item=[{'_ansible_parsed': True, 'stderr_lines': [], u'cmd': [u'blkid', u'-t', u'PTTYPE=gpt', u'/dev/vdf'], u'end': u'2018-08-20 10:31:36.467284', '_ansible_no_log': False, u'stdout': u'', '_ansible_item_result': True, u'changed': False, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'blkid -t PTTYPE=\"gpt\" /dev/vdf', u'removes': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'item': u'/dev/vdf', u'delta': u'0:00:00.007161', '_ansible_item_label': u'/dev/vdf', u'stderr': u'', u'rc': 2, u'msg': u'non-zero return code', 'stdout_lines': [], 'failed_when_result': False, u'start': u'2018-08-20 10:31:36.460123', '_ansible_ignore_errors': None, u'failed': 
False}, u'/dev/vdf']) => {\"changed\": false, \"cmd\": [\"parted\", \"-s\", \"/dev/vdf\", \"mklabel\", \"gpt\"], \"delta\": \"0:00:00.006684\", \"end\": \"2018-08-20 10:31:37.388601\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"cmd\": [\"blkid\", \"-t\", \"PTTYPE=gpt\", \"/dev/vdf\"], \"delta\": \"0:00:00.007161\", \"end\": \"2018-08-20 10:31:36.467284\", \"failed\": false, \"failed_when_result\": false, \"invocation\": {\"module_args\": {\"_raw_params\": \"blkid -t PTTYPE=\\\"gpt\\\" /dev/vdf\", \"_uses_shell\": false, \"chdir\": null, \"creates\": null, \"executable\": null, \"removes\": null, \"stdin\": null, \"warn\": true}}, \"item\": \"/dev/vdf\", \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:31:36.460123\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.381917\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : include scenarios/collocated.yml] *****************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:41", "Monday 20 August 2018 06:31:37 -0400 (0:00:00.933) 0:02:18.099 ********* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml for ceph-0", "", "TASK [ceph-osd : prepare ceph containerized osd disk collocated] ***************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:5", "Monday 20 August 2018 06:31:37 -0400 (0:00:00.096) 0:02:18.196 ********* ", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': 
{u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdb -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdb -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.526070\", \"end\": \"2018-08-20 10:31:44.256951\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:37.730881\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:37'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e4536f11-dd7a-409d-aa66-ee7ff961b6b2 /dev/vdb\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdb\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e4536f11-dd7a-409d-aa66-ee7ff961b6b2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdb\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdb\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:05d3f79b-203c-4ff3-a357-964440c16877 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\\nupdate_partition: Calling partprobe on created device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdb1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\\nmount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.Zzy7DS with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.Zzy7DS\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Zzy7DS/journal -> /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.Zzy7DS\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Zzy7DS\\nget_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\\nupdate_partition: Calling partprobe on prepared device /dev/vdb\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\\n++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdb1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:37'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdb ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdb ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdb print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid e4536f11-dd7a-409d-aa66-ee7ff961b6b2 /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdb\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:e4536f11-dd7a-409d-aa66-ee7ff961b6b2 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdb\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdb\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:05d3f79b-203c-4ff3-a357-964440c16877 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb\", \"update_partition: Calling partprobe on created device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdb1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdb1\", \"mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.Zzy7DS with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdb1 /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.Zzy7DS\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/ceph_fsid.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/fsid.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/magic.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/journal_uuid.19072.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.Zzy7DS/journal -> /dev/disk/by-partuuid/e4536f11-dd7a-409d-aa66-ee7ff961b6b2\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS/type.19072.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.Zzy7DS\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.Zzy7DS\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.Zzy7DS\", \"get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/252:16/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb\", \"update_partition: Calling partprobe on prepared device /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdb /usr/sbin/partprobe /dev/vdb\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdb2 ]; do echo '\\\\''Waiting for /dev/vdb2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdb2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdb1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdb1 ]; do echo '\\\\''Waiting for /dev/vdb1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdb 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdb\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdb ]]\", \"++common_functions.sh:124: dev_part(): [[ b == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdb1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdb1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:37 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:37 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:37 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:37 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdb\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/mds/ceph-ceph-0' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from 
root:root to ceph:ceph\\n2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdb2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdb1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:37 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:37 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:37 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:37 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdb\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mon/ceph-ceph-0' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/mds/ceph-ceph-0' 
from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rgw' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-mds' from 64045:64045 to ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/bootstrap-rbd' from 64045:64045 to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/mgr/ceph-ceph-0' from root:root to ceph:ceph\", \"2018-08-20 10:31:37 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdb1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdb2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdb1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', 
u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdc -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdc -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.363569\", \"end\": \"2018-08-20 10:31:50.777600\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:44.414031\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase 
OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for 
directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:44'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 424551cb-046e-4505-a66a-438dfc9d8634 /dev/vdc\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdc\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:424551cb-046e-4505-a66a-438dfc9d8634 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdc\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdc\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:51d3baa7-0bd1-40d9-aba5-61e421e4e282 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\\nupdate_partition: Calling partprobe on created device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdc1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\\nmount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.1LWkaV with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.1LWkaV\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.1LWkaV/journal -> /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.1LWkaV\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1LWkaV\\nget_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\\nupdate_partition: Calling partprobe on prepared device /dev/vdc\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\\n++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdc1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:44'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdc ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdc ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdc print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 424551cb-046e-4505-a66a-438dfc9d8634 /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdc\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:424551cb-046e-4505-a66a-438dfc9d8634 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc2 uuid path is /sys/dev/block/252:34/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdc\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdc\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:51d3baa7-0bd1-40d9-aba5-61e421e4e282 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc\", \"update_partition: Calling partprobe on created device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdc1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdc1\", \"mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.1LWkaV with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdc1 /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.1LWkaV\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/ceph_fsid.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/fsid.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/magic.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/journal_uuid.19336.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.1LWkaV/journal -> /dev/disk/by-partuuid/424551cb-046e-4505-a66a-438dfc9d8634\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV/type.19336.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.1LWkaV\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.1LWkaV\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.1LWkaV\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.1LWkaV\", \"get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc\", \"update_partition: Calling partprobe on prepared device /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdc2 ]; do echo '\\\\''Waiting for /dev/vdc2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdc2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdc1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdc1 ]; do echo '\\\\''Waiting for /dev/vdc1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdc 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdc\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdc ]]\", \"++common_functions.sh:124: dev_part(): [[ c == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdc1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdc1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:44 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:44 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:44 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:44 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdc\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdc2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdc1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:44 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:44 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:44 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:44 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdc\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership 
of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' from root:root to ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:44 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdc2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdc1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': 
u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdd -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdd -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.374018\", \"end\": \"2018-08-20 10:31:57.322381\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"rc\": 0, \"start\": \"2018-08-20 10:31:50.948363\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" 
in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: 
create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:51'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 252b6b36-a52a-4a4f-820c-362379283e95 /dev/vdd\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdd\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:252b6b36-a52a-4a4f-820c-362379283e95 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdd\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdd\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:cafd64c3-82a1-4313-b1a3-a1926402114d --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\\nupdate_partition: Calling partprobe on created device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdd1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\\nmount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.HvU_2j with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.HvU_2j\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HvU_2j/journal -> /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.HvU_2j\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HvU_2j\\nget_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\\nupdate_partition: Calling partprobe on prepared device /dev/vdd\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\\n++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdd1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:51'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdd ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdd ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdd print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 252b6b36-a52a-4a4f-820c-362379283e95 /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdd\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:252b6b36-a52a-4a4f-820c-362379283e95 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd2 uuid path is /sys/dev/block/252:50/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdd\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdd\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:cafd64c3-82a1-4313-b1a3-a1926402114d --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd\", \"update_partition: Calling partprobe on created device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdd1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdd1\", \"mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.HvU_2j with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdd1 /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.HvU_2j\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/ceph_fsid.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/fsid.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/magic.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/journal_uuid.19592.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HvU_2j/journal -> /dev/disk/by-partuuid/252b6b36-a52a-4a4f-820c-362379283e95\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j/type.19592.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.HvU_2j\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HvU_2j\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.HvU_2j\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HvU_2j\", \"get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd\", \"update_partition: Calling partprobe on prepared device /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdd /usr/sbin/partprobe /dev/vdd\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdd2 ]; do echo '\\\\''Waiting for /dev/vdd2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdd2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdd1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdd1 ]; do echo '\\\\''Waiting for /dev/vdd1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdd 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdd\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdd ]]\", \"++common_functions.sh:124: dev_part(): [[ d == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdd1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdd1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:51 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:51 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:51 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:51 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdd\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdd2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdd1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:51 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:51 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:51 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:51 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdd\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of 
'/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:51 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdd2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdd1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] 
=> (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vde -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vde -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.558030\", \"end\": \"2018-08-20 10:32:04.057449\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"rc\": 0, 
\"start\": \"2018-08-20 10:31:57.499419\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: 
create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:57'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3962bf57-ff8b-4c96-ae23-ec662ba06977 /dev/vde\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_type: Will colocate journal with data on /dev/vde\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3962bf57-ff8b-4c96-ae23-ec662ba06977 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\\nupdate_partition: Calling 
partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vde\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vde\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:951e27f1-a8eb-4e7c-8d54-e78da591a6b7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\\nupdate_partition: Calling partprobe on created device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vde1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\\nmount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.shxyGX with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.shxyGX\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.shxyGX/journal -> /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.shxyGX\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.shxyGX\\nget_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\\nupdate_partition: Calling partprobe on prepared device /dev/vde\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\\n++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vde1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:31:57'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vde ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vde ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vde print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid 3962bf57-ff8b-4c96-ae23-ec662ba06977 /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vde\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:3962bf57-ff8b-4c96-ae23-ec662ba06977 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde2 uuid path is /sys/dev/block/252:66/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vde\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vde\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:951e27f1-a8eb-4e7c-8d54-e78da591a6b7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vde\", \"update_partition: Calling partprobe on created device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vde1 uuid path is /sys/dev/block/252:65/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vde1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vde1\", \"mount: Mounting /dev/vde1 on /var/lib/ceph/tmp/mnt.shxyGX with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vde1 /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.shxyGX\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/ceph_fsid.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/fsid.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/magic.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/journal_uuid.19853.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.shxyGX/journal -> /dev/disk/by-partuuid/3962bf57-ff8b-4c96-ae23-ec662ba06977\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX/type.19853.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.shxyGX\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.shxyGX\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.shxyGX\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.shxyGX\", \"get_dm_uuid: get_dm_uuid /dev/vde uuid path is /sys/dev/block/252:64/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vde\", \"update_partition: Calling partprobe on prepared device /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vde /usr/sbin/partprobe /dev/vde\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vde1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vde2 ]; do echo '\\\\''Waiting for /dev/vde2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vde 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vde2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vde1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vde1 ]; do echo '\\\\''Waiting for /dev/vde1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vde 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vde\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vde ]]\", \"++common_functions.sh:124: dev_part(): [[ e == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vde1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vde1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:31:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:31:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:31:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:31:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vde\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vde2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vde1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:31:57 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:31:57 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:31:57 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:31:57 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vde\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:31:57 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vde1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", 
\"changed ownership of '/dev/vde2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vde1' from root:disk to ceph:ceph\"]}", "changed: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": true, \"cmd\": \"docker run --net=host --pid=host --privileged=true --name=ceph-osd-prepare-ceph-0-vdf -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /dev:/dev -v /etc/localtime:/etc/localtime:ro -e DEBUG=verbose -e CLUSTER=ceph -e CEPH_DAEMON=OSD_CEPH_DISK_PREPARE -e OSD_DEVICE=/dev/vdf -e OSD_BLUESTORE=0 -e OSD_FILESTORE=1 -e OSD_DMCRYPT=0 -e OSD_JOURNAL_SIZE=512 192.168.24.1:8787/rhceph:3-11\", \"delta\": \"0:00:06.552197\", \"end\": \"2018-08-20 10:32:10.770316\", \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", 
\"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"rc\": 0, \"start\": \"2018-08-20 10:32:04.218119\", \"stderr\": \"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\\n+/entrypoint.sh:26: source /config.static.sh\\n++/config.static.sh:2: set -e\\n++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\\n++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\\n+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\\n+/entrypoint.sh:38: create_mandatory_directories\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\\n+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\\n++common_functions.sh:54: create_mandatory_directories(): dirname 
/var/lib/ceph/bootstrap-rbd/ceph.keyring\\n+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rbd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\\n+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\\n+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\\n+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\\n+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\\n+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\\n+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\\n+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\\n+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\\n+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\\n+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\\n+/entrypoint.sh:78: source start_osd.sh\\n++start_osd.sh:2: set -e\\n++start_osd.sh:4: is_redhat\\n++common_functions.sh:211: is_redhat(): get_package_manager\\n++common_functions.sh:196: get_package_manager(): is_available rpm\\n++common_functions.sh:47: is_available(): command -v rpm\\n++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\\n++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\\n++start_osd.sh:5: source /etc/sysconfig/ceph\\n+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\\n+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\\n+/entrypoint.sh:79: OSD_TYPE=prepare\\n+/entrypoint.sh:80: start_osd\\n+start_osd.sh:11: start_osd(): get_config\\n+/config.static.sh:114: get_config(): log 'static: does not generate config'\\n+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\\n+common_functions.sh:11: log(): local timestamp\\n++common_functions.sh:12: log(): date '+%F %T'\\n+common_functions.sh:12: log(): timestamp='2018-08-20 10:32:04'\\n+common_functions.sh:13: log(): echo '2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config'\\n+common_functions.sh:14: log(): return 0\\n+start_osd.sh:12: start_osd(): check_config\\n+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\\n+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\\n+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\\n+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\\n++osd_disk_prepare.sh:2: source(): set -e\\n+start_osd.sh:34: start_osd(): osd_disk_prepare\\n+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\\n+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\\n+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\\n+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\\n+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\\n+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\\n+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\\n+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\\n+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\\n+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\\n+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\\n+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid a0b92a62-97d2-44e1-9c14-6c834bffed36 /dev/vdf\\ncommand: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\\ncommand: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\ncommand: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_type: Will colocate journal with data on /dev/vdf\\ncommand: 
Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\\ncommand: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = journal\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating journal partition num 2 size 512 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:a0b92a62-97d2-44e1-9c14-6c834bffed36 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\\nupdate_partition: Calling 
partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nprepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nset_data_partition: Creating osd partition on /dev/vdf\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nptype_tobe_for_name: name = data\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncreate_partition: Creating data partition num 1 size 0 on /dev/vdf\\ncommand_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9f041355-3e8c-4398-9922-f4b1641b83aa --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\\nupdate_partition: Calling partprobe on created device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\nget_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\\npopulate_data_path_device: Creating xfs fs on /dev/vdf1\\ncommand_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\\nmount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.ORoqlF with options noatime,largeio,inode64,swalloc\\ncommand_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/sbin/restorecon 
/var/lib/ceph/tmp/mnt.ORoqlF\\npopulate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\\nadjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.ORoqlF/journal -> /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\\ncommand: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF\\nunmount: Unmounting /var/lib/ceph/tmp/mnt.ORoqlF\\ncommand_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ORoqlF\\nget_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\\ncommand_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\\nupdate_partition: Calling partprobe on prepared device /dev/vdf\\ncommand_check_call: Running command: /usr/bin/udevadm settle --timeout=600\\ncommand: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\\ncommand_check_call: Running 
command: /usr/bin/udevadm settle --timeout=600\\ncommand_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\\n+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\\n+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\\n+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\\n+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\\n+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\\n+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\\n++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=2\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf2\\n+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf2\\n++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\\n+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\\n++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\\n++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\\n++common_functions.sh:90: dev_part(): local osd_partition=1\\n++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\\n++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\\n++common_functions.sh:127: dev_part(): echo /dev/vdf1\\n+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\\n+/entrypoint.sh:189: exit 0\", \"stderr_lines\": [\"+/entrypoint.sh:16: case \\\"$KV_TYPE\\\" in\", \"+/entrypoint.sh:26: source /config.static.sh\", \"++/config.static.sh:2: set -e\", \"++/entrypoint.sh:36: to_lowercase OSD_CEPH_DISK_PREPARE\", \"++common_functions.sh:178: to_lowercase(): echo osd_ceph_disk_prepare\", \"+/entrypoint.sh:36: CEPH_DAEMON=osd_ceph_disk_prepare\", \"+/entrypoint.sh:38: create_mandatory_directories\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-osd\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-mds/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-mds\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rgw/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p /var/lib/ceph/bootstrap-rgw\", \"+common_functions.sh:53: create_mandatory_directories(): for keyring in '$OSD_BOOTSTRAP_KEYRING' '$MDS_BOOTSTRAP_KEYRING' '$RGW_BOOTSTRAP_KEYRING' '$RBD_MIRROR_BOOTSTRAP_KEYRING'\", \"++common_functions.sh:54: create_mandatory_directories(): dirname /var/lib/ceph/bootstrap-rbd/ceph.keyring\", \"+common_functions.sh:54: create_mandatory_directories(): mkdir -p 
/var/lib/ceph/bootstrap-rbd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/osd\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/tmp\", \"+common_functions.sh:58: create_mandatory_directories(): for directory in mon osd mds radosgw tmp mgr\", \"+common_functions.sh:59: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr\", \"+common_functions.sh:63: create_mandatory_directories(): mkdir -p /var/lib/ceph/mon/ceph-ceph-0\", \"+common_functions.sh:66: create_mandatory_directories(): mkdir -p /var/run/ceph\", \"+common_functions.sh:69: create_mandatory_directories(): mkdir -p /var/lib/ceph/radosgw/ceph-rgw.ceph-0\", \"+common_functions.sh:72: create_mandatory_directories(): mkdir -p /var/lib/ceph/mds/ceph-ceph-0\", \"+common_functions.sh:75: create_mandatory_directories(): mkdir -p /var/lib/ceph/mgr/ceph-ceph-0\", \"+common_functions.sh:78: create_mandatory_directories(): chown --verbose -R ceph. /var/run/ceph/\", \"+common_functions.sh:79: create_mandatory_directories(): find -L /var/lib/ceph/ -mindepth 1 -maxdepth 3 -exec chown --verbose ceph. 
'{}' ';'\", \"+/entrypoint.sh:42: case \\\"$CEPH_DAEMON\\\" in\", \"+/entrypoint.sh:78: source start_osd.sh\", \"++start_osd.sh:2: set -e\", \"++start_osd.sh:4: is_redhat\", \"++common_functions.sh:211: is_redhat(): get_package_manager\", \"++common_functions.sh:196: get_package_manager(): is_available rpm\", \"++common_functions.sh:47: is_available(): command -v rpm\", \"++common_functions.sh:197: get_package_manager(): OS_VENDOR=redhat\", \"++common_functions.sh:212: is_redhat(): [[ redhat == \\\\r\\\\e\\\\d\\\\h\\\\a\\\\t ]]\", \"++start_osd.sh:5: source /etc/sysconfig/ceph\", \"+++/etc/sysconfig/ceph:7: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728\", \"+++/etc/sysconfig/ceph:18: CEPH_AUTO_RESTART_ON_UPGRADE=no\", \"+/entrypoint.sh:79: OSD_TYPE=prepare\", \"+/entrypoint.sh:80: start_osd\", \"+start_osd.sh:11: start_osd(): get_config\", \"+/config.static.sh:114: get_config(): log 'static: does not generate config'\", \"+common_functions.sh:7: log(): '[' -z 'static: does not generate config' ']'\", \"+common_functions.sh:11: log(): local timestamp\", \"++common_functions.sh:12: log(): date '+%F %T'\", \"+common_functions.sh:12: log(): timestamp='2018-08-20 10:32:04'\", \"+common_functions.sh:13: log(): echo '2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config'\", \"+common_functions.sh:14: log(): return 0\", \"+start_osd.sh:12: start_osd(): check_config\", \"+common_functions.sh:19: check_config(): [[ ! -e /etc/ceph/ceph.conf ]]\", \"+start_osd.sh:14: start_osd(): '[' 0 -eq 1 ']'\", \"+start_osd.sh:19: start_osd(): case \\\"$OSD_TYPE\\\" in\", \"+start_osd.sh:33: start_osd(): source osd_disk_prepare.sh\", \"++osd_disk_prepare.sh:2: source(): set -e\", \"+start_osd.sh:34: start_osd(): osd_disk_prepare\", \"+osd_disk_prepare.sh:5: osd_disk_prepare(): [[ -z /dev/vdf ]]\", \"+osd_disk_prepare.sh:10: osd_disk_prepare(): [[ ! -e /dev/vdf ]]\", \"+osd_disk_prepare.sh:15: osd_disk_prepare(): '[' '!' 
-e /var/lib/ceph/bootstrap-osd/ceph.keyring ']'\", \"+osd_disk_prepare.sh:20: osd_disk_prepare(): ceph_health client.bootstrap-osd /var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:318: ceph_health(): local bootstrap_user=client.bootstrap-osd\", \"+common_functions.sh:319: ceph_health(): local bootstrap_key=/var/lib/ceph/bootstrap-osd/ceph.keyring\", \"+common_functions.sh:321: ceph_health(): timeout 10 ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring health\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): parted --script /dev/vdf print\", \"+osd_disk_prepare.sh:23: osd_disk_prepare(): grep -qE '^ 1.*ceph data'\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): IFS=' '\", \"+osd_disk_prepare.sh:30: osd_disk_prepare(): read -r -a CEPH_DISK_CLI_OPTS\", \"+osd_disk_prepare.sh:31: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:38: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:47: osd_disk_prepare(): [[ 1 -eq 1 ]]\", \"+osd_disk_prepare.sh:48: osd_disk_prepare(): CEPH_DISK_CLI_OPTS+=(--filestore)\", \"+osd_disk_prepare.sh:49: osd_disk_prepare(): [[ -n '' ]]\", \"+osd_disk_prepare.sh:52: osd_disk_prepare(): ceph-disk -v prepare --cluster ceph --filestore --journal-uuid a0b92a62-97d2-44e1-9c14-6c834bffed36 /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid\", \"command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", 
\"set_type: Will colocate journal with data on /dev/vdf\", \"command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs\", \"command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = journal\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating journal partition num 2 size 512 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+512M --change-name=2:ceph journal --partition-guid=2:a0b92a62-97d2-44e1-9c14-6c834bffed36 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf2 uuid path is /sys/dev/block/252:82/dm/uuid\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", 
\"command_check_call: Running command: /usr/sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"prepare_device: Journal is GPT partition /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"set_data_partition: Creating osd partition on /dev/vdf\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"ptype_tobe_for_name: name = data\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"create_partition: Creating data partition num 1 size 0 on /dev/vdf\", \"command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:9f041355-3e8c-4398-9922-f4b1641b83aa --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdf\", \"update_partition: Calling partprobe on created device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"get_dm_uuid: get_dm_uuid /dev/vdf1 uuid path is /sys/dev/block/252:81/dm/uuid\", \"populate_data_path_device: Creating xfs fs on /dev/vdf1\", \"command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -f -- /dev/vdf1\", \"mount: Mounting /dev/vdf1 on /var/lib/ceph/tmp/mnt.ORoqlF with options 
noatime,largeio,inode64,swalloc\", \"command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,largeio,inode64,swalloc -- /dev/vdf1 /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.ORoqlF\", \"populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/ceph_fsid.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/fsid.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/magic.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/journal_uuid.20113.tmp\", \"adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.ORoqlF/journal -> /dev/disk/by-partuuid/a0b92a62-97d2-44e1-9c14-6c834bffed36\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF/type.20113.tmp\", \"command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.ORoqlF\", \"command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.ORoqlF\", \"unmount: Unmounting /var/lib/ceph/tmp/mnt.ORoqlF\", \"command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.ORoqlF\", \"get_dm_uuid: get_dm_uuid /dev/vdf uuid path is /sys/dev/block/252:80/dm/uuid\", \"command_check_call: Running command: /usr/sbin/sgdisk 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdf\", \"update_partition: Calling partprobe on prepared device /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command: Running command: /usr/bin/flock -s /dev/vdf /usr/sbin/partprobe /dev/vdf\", \"command_check_call: Running command: /usr/bin/udevadm settle --timeout=600\", \"command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdf1\", \"+osd_disk_prepare.sh:56: osd_disk_prepare(): [[ 0 -eq 1 ]]\", \"+osd_disk_prepare.sh:75: osd_disk_prepare(): udevadm settle --timeout=600\", \"+osd_disk_prepare.sh:77: osd_disk_prepare(): apply_ceph_ownership_to_disks\", \"+common_functions.sh:265: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:274: apply_ceph_ownership_to_disks(): [[ 0 -eq 1 ]]\", \"+common_functions.sh:287: apply_ceph_ownership_to_disks(): [[ 1 -eq 1 ]]\", \"+common_functions.sh:288: apply_ceph_ownership_to_disks(): [[ -n '' ]]\", \"++common_functions.sh:292: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:292: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf2\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! 
-e /dev/vdf2 ]; do echo '\\\\''Waiting for /dev/vdf2 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:293: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 2\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=2\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf2\", \"+common_functions.sh:293: apply_ceph_ownership_to_disks(): chown --verbose ceph. /dev/vdf2\", \"++common_functions.sh:296: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:296: apply_ceph_ownership_to_disks(): wait_for_file /dev/vdf1\", \"+common_functions.sh:217: wait_for_file(): timeout 10 bash -c 'while [ ! -e /dev/vdf1 ]; do echo '\\\\''Waiting for /dev/vdf1 to show up'\\\\'' && sleep 1 ; done'\", \"++common_functions.sh:297: apply_ceph_ownership_to_disks(): dev_part /dev/vdf 1\", \"++common_functions.sh:89: dev_part(): local osd_device=/dev/vdf\", \"++common_functions.sh:90: dev_part(): local osd_partition=1\", \"++common_functions.sh:92: dev_part(): [[ -L /dev/vdf ]]\", \"++common_functions.sh:124: dev_part(): [[ f == [0-9] ]]\", \"++common_functions.sh:127: dev_part(): echo /dev/vdf1\", \"+common_functions.sh:297: apply_ceph_ownership_to_disks(): chown --verbose ceph. 
/dev/vdf1\", \"+/entrypoint.sh:189: exit 0\"], \"stdout\": \"2018-08-20 10:32:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\\n2018-08-20 10:32:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\\n2018-08-20 10:32:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\\n2018-08-20 10:32:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\\nOSD_DEVICE=/dev/vdf\\nLC_ALL=C\\nOSD_BLUESTORE=0\\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\\nOSD_JOURNAL_SIZE=512\\nPWD=/\\nCEPH_VERSION=luminous\\nSHLVL=1\\nHOME=/root\\nCEPH_POINT_RELEASE=\\nCLUSTER=ceph\\nOSD_DMCRYPT=0\\nCEPH_DAEMON=OSD_CEPH_DISK_PREPARE\\ncontainer=oci\\nDEBUG=verbose\\nOSD_FILESTORE=1\\n_=/usr/bin/env\\nownership of '/var/run/ceph/' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon' retained as ceph:ceph\\nownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/osd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\\nownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' retained as ceph:ceph\\nchanged ownership of '/var/lib/ceph/tmp/tmp.BUGam3YbjO' from root:root to ceph:ceph\\nownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd' retained as 
ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\\nownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr' retained as ceph:ceph\\nownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\\n2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config\\nHEALTH_OK\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nThe operation has completed successfully.\\nmeta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\\n = sectsz=512 attr=2, projid32bit=1\\n = crc=1 finobt=0, sparse=0\\ndata = bsize=4096 blocks=2752251, imaxpct=25\\n = sunit=0 swidth=0 blks\\nnaming =version 2 bsize=4096 ascii-ci=0 ftype=1\\nlog =internal log bsize=4096 blocks=2560, version=2\\n = sectsz=512 sunit=0 blks, lazy-count=1\\nrealtime =none extsz=4096 blocks=0, rtextents=0\\nThe operation has completed successfully.\\nchanged ownership of '/dev/vdf2' from root:disk to ceph:ceph\\nchanged ownership of '/dev/vdf1' from root:disk to ceph:ceph\", \"stdout_lines\": [\"2018-08-20 10:32:04 /entrypoint.sh: VERBOSE: activating bash debugging mode.\", \"2018-08-20 10:32:04 /entrypoint.sh: To run Ceph daemons in debugging mode, pass the CEPH_ARGS variable like this:\", \"2018-08-20 10:32:04 /entrypoint.sh: -e CEPH_ARGS='--debug-ms 1 --debug-osd 10'\", \"2018-08-20 10:32:04 /entrypoint.sh: This container environement variables are: HOSTNAME=ceph-0\", \"OSD_DEVICE=/dev/vdf\", \"LC_ALL=C\", \"OSD_BLUESTORE=0\", \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\", \"OSD_JOURNAL_SIZE=512\", \"PWD=/\", \"CEPH_VERSION=luminous\", \"SHLVL=1\", \"HOME=/root\", \"CEPH_POINT_RELEASE=\", \"CLUSTER=ceph\", \"OSD_DMCRYPT=0\", \"CEPH_DAEMON=OSD_CEPH_DISK_PREPARE\", \"container=oci\", \"DEBUG=verbose\", \"OSD_FILESTORE=1\", \"_=/usr/bin/env\", \"ownership of '/var/run/ceph/' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mon' retained as ceph:ceph\", 
\"ownership of '/var/lib/ceph/mon/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mds/ceph-ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.nlUX44Eda5' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/ceph-disk.prepare.lock' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.lNGRuyzONC' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.8xLmHmWXKf' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/tmp/tmp.uVYdndgfid' retained as ceph:ceph\", \"changed ownership of '/var/lib/ceph/tmp/tmp.BUGam3YbjO' from root:root to ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/radosgw/ceph-rgw.ceph-0' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rgw' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-mds' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-osd/ceph.keyring' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/bootstrap-rbd' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr' retained as ceph:ceph\", \"ownership of '/var/lib/ceph/mgr/ceph-ceph-0' retained as ceph:ceph\", \"2018-08-20 10:32:04 /entrypoint.sh: static: does not generate config\", \"HEALTH_OK\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"The operation has completed successfully.\", \"meta-data=/dev/vdf1 isize=2048 agcount=4, agsize=688063 blks\", \" = sectsz=512 attr=2, projid32bit=1\", \" = crc=1 finobt=0, sparse=0\", \"data = bsize=4096 blocks=2752251, imaxpct=25\", \" = sunit=0 swidth=0 blks\", \"naming =version 2 bsize=4096 ascii-ci=0 ftype=1\", \"log =internal log bsize=4096 blocks=2560, version=2\", \" = 
sectsz=512 sunit=0 blks, lazy-count=1\", \"realtime =none extsz=4096 blocks=0, rtextents=0\", \"The operation has completed successfully.\", \"changed ownership of '/dev/vdf2' from root:disk to ceph:ceph\", \"changed ownership of '/dev/vdf1' from root:disk to ceph:ceph\"]}", "", "TASK [ceph-osd : automatic prepare ceph containerized osd disk collocated] *****", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:30", "Monday 20 August 2018 06:32:10 -0400 (0:00:33.312) 0:02:51.508 ********* ", "skipping: [ceph-0] => (item=/dev/vdb) => {\"changed\": false, \"item\": \"/dev/vdb\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdc) => {\"changed\": false, \"item\": \"/dev/vdc\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdd) => {\"changed\": false, \"item\": \"/dev/vdd\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vde) => {\"changed\": false, \"item\": \"/dev/vde\", \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=/dev/vdf) => {\"changed\": false, \"item\": \"/dev/vdf\", \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : manually prepare ceph \"filestore\" non-containerized osd disk(s) with collocated osd data and journal] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53", "Monday 20 August 2018 06:32:10 -0400 (0:00:00.071) 0:02:51.580 ********* ", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdb', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdb', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdb', 
u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdb', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdb']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdb\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdb\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdb\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdb\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdb\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdc', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdc', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdc', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdc', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdc']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdc\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, 
\"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdc\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdc\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdc\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdc\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdd', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vdd', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdd', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdd', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdd']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdd\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdd\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdd\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", 
\"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdd\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdd\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vde', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, u'changed': False, 'failed': False, 'item': u'/dev/vde', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vde', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vde', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vde']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vde\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vde\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vde\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vde\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vde\"], \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item=[{'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'/dev/vdf', u'script': u\"unit 'MiB' print\", '_ansible_no_log': False, 
u'changed': False, 'failed': False, 'item': u'/dev/vdf', u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/vdf', u'unit': u'MiB'}}, u'disk': {u'dev': u'/dev/vdf', u'physical_block': 512, u'table': u'unknown', u'logical_block': 512, u'model': u'Virtio Block Device', u'unit': u'mib', u'size': 11264.0}, '_ansible_ignore_errors': None, u'partitions': []}, u'/dev/vdf']) => {\"changed\": false, \"item\": [{\"_ansible_ignore_errors\": null, \"_ansible_item_label\": \"/dev/vdf\", \"_ansible_item_result\": true, \"_ansible_no_log\": false, \"_ansible_parsed\": true, \"changed\": false, \"disk\": {\"dev\": \"/dev/vdf\", \"logical_block\": 512, \"model\": \"Virtio Block Device\", \"physical_block\": 512, \"size\": 11264.0, \"table\": \"unknown\", \"unit\": \"mib\"}, \"failed\": false, \"invocation\": {\"module_args\": {\"align\": \"optimal\", \"device\": \"/dev/vdf\", \"flags\": null, \"label\": \"msdos\", \"name\": null, \"number\": null, \"part_end\": \"100%\", \"part_start\": \"0%\", \"part_type\": \"primary\", \"state\": \"info\", \"unit\": \"MiB\"}}, \"item\": \"/dev/vdf\", \"partitions\": [], \"script\": \"unit 'MiB' print\"}, \"/dev/vdf\"], \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/non-collocated.yml] *************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:48", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.106) 0:02:51.687 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include scenarios/lvm.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:56", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.049) 0:02:51.736 ********* ", "skipping: [ceph-0] => {\"changed\": 
false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include activate_osds.yml] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:64", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.045) 0:02:51.781 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include start_osds.yml] ***************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:72", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.047) 0:02:51.828 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : include docker/main.yml] **************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:80", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.045) 0:02:51.874 ********* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml for ceph-0", "", "TASK [ceph-osd : include start_docker_osd.yml] *********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/main.yml:2", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.091) 0:02:51.965 ********* ", "included: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml for ceph-0", "", "TASK [ceph-osd : umount ceph disk (if on openstack)] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:4", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.068) 0:02:52.034 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : test if the container image has the disk_list function] *******", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:13", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.051) 0:02:52.085 ********* ", "ok: 
[ceph-0] => {\"changed\": false, \"cmd\": [\"docker\", \"run\", \"--rm\", \"--entrypoint=stat\", \"192.168.24.1:8787/rhceph:3-11\", \"disk_list.sh\"], \"delta\": \"0:00:00.300563\", \"end\": \"2018-08-20 10:32:11.907338\", \"failed_when_result\": false, \"rc\": 0, \"start\": \"2018-08-20 10:32:11.606775\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \" File: 'disk_list.sh'\\n Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\\nDevice: 2ah/42d\\tInode: 5353940 Links: 1\\nAccess: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\\nAccess: 2018-07-06 17:29:14.000000000 +0000\\nModify: 2018-07-06 17:29:14.000000000 +0000\\nChange: 2018-08-20 10:31:15.775934684 +0000\\n Birth: -\", \"stdout_lines\": [\" File: 'disk_list.sh'\", \" Size: 3726 \\tBlocks: 8 IO Block: 4096 regular file\", \"Device: 2ah/42d\\tInode: 5353940 Links: 1\", \"Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)\", \"Access: 2018-07-06 17:29:14.000000000 +0000\", \"Modify: 2018-07-06 17:29:14.000000000 +0000\", \"Change: 2018-08-20 10:31:15.775934684 +0000\", \" Birth: -\"]}", "", "TASK [ceph-osd : generate ceph osd docker run script] **************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:19", "Monday 20 August 2018 06:32:11 -0400 (0:00:00.521) 0:02:52.607 ********* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"100bffd271ecfac88d5dd501d37dfca7b05f2102\", \"dest\": \"/usr/share/ceph-osd-run.sh\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8f90e441a65774a9867e35ad6cde7f59\", \"mode\": \"0744\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:usr_t:s0\", \"size\": 964, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761131.99-269983588461395/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : generate systemd unit file] ***********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:30", "Monday 20 August 2018 06:32:12 -0400 
(0:00:00.761) 0:02:53.368 ********* ", "changed: [ceph-0] => {\"changed\": true, \"checksum\": \"b7abfb86a4af8d6e54d349965cae96bf9b995c49\", \"dest\": \"/etc/systemd/system/ceph-osd@.service\", \"gid\": 0, \"group\": \"root\", \"md5sum\": \"8a53f95e6590750e7c4807589dd5864c\", \"mode\": \"0644\", \"owner\": \"root\", \"secontext\": \"system_u:object_r:systemd_unit_file_t:s0\", \"size\": 496, \"src\": \"/tmp/ceph_ansible_tmp/ansible-tmp-1534761132.88-63545072408675/source\", \"state\": \"file\", \"uid\": 0}", "", "TASK [ceph-osd : systemd start osd container] **********************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/docker/start_docker_osd.yml:41", "Monday 20 August 2018 06:32:13 -0400 (0:00:00.837) 0:02:54.206 ********* ", "changed: [ceph-0] => (item=/dev/vdb) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdb\", \"name\": \"ceph-osd@vdb\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"systemd-journald.socket system-ceph\\\\x5cx2dosd.slice docker.service basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": 
\"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdb.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", 
\"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdb.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"disabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdc) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdc\", \"name\": \"ceph-osd@vdc\", \"state\": \"started\", \"status\": 
{\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"basic.target docker.service system-ceph\\\\x5cx2dosd.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": 
\"0\", \"Id\": \"ceph-osd@vdc.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdc.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", 
\"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdd) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdd\", \"name\": \"ceph-osd@vdd\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service basic.target system-ceph\\\\x5cx2dosd.slice systemd-journald.socket\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", 
\"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdd.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", 
\"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdd.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => 
(item=/dev/vde) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vde\", \"name\": \"ceph-osd@vde\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"docker.service systemd-journald.socket system-ceph\\\\x5cx2dosd.slice basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": \"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": 
\"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vde.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": \"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vde.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", 
\"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": \"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "changed: [ceph-0] => (item=/dev/vdf) => {\"changed\": true, \"enabled\": true, \"item\": \"/dev/vdf\", \"name\": \"ceph-osd@vdf\", \"state\": \"started\", \"status\": {\"ActiveEnterTimestampMonotonic\": \"0\", \"ActiveExitTimestampMonotonic\": \"0\", \"ActiveState\": \"inactive\", \"After\": \"system-ceph\\\\x5cx2dosd.slice docker.service systemd-journald.socket basic.target\", \"AllowIsolate\": \"no\", \"AmbientCapabilities\": \"0\", \"AssertResult\": \"no\", \"AssertTimestampMonotonic\": \"0\", \"Before\": \"shutdown.target\", \"BlockIOAccounting\": \"no\", \"BlockIOWeight\": \"18446744073709551615\", \"CPUAccounting\": \"no\", \"CPUQuotaPerSecUSec\": \"infinity\", \"CPUSchedulingPolicy\": \"0\", \"CPUSchedulingPriority\": \"0\", \"CPUSchedulingResetOnFork\": \"no\", \"CPUShares\": \"18446744073709551615\", \"CanIsolate\": \"no\", \"CanReload\": \"no\", \"CanStart\": \"yes\", \"CanStop\": \"yes\", \"CapabilityBoundingSet\": 
\"18446744073709551615\", \"ConditionResult\": \"no\", \"ConditionTimestampMonotonic\": \"0\", \"Conflicts\": \"shutdown.target\", \"ControlPID\": \"0\", \"DefaultDependencies\": \"yes\", \"Delegate\": \"no\", \"Description\": \"Ceph OSD\", \"DevicePolicy\": \"auto\", \"EnvironmentFile\": \"/etc/environment (ignore_errors=yes)\", \"ExecMainCode\": \"0\", \"ExecMainExitTimestampMonotonic\": \"0\", \"ExecMainPID\": \"0\", \"ExecMainStartTimestampMonotonic\": \"0\", \"ExecMainStatus\": \"0\", \"ExecStart\": \"{ path=/usr/share/ceph-osd-run.sh ; argv[]=/usr/share/ceph-osd-run.sh %i ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStartPre\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker rm -f ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"ExecStop\": \"{ path=/usr/bin/docker ; argv[]=/usr/bin/docker stop ceph-osd-ceph-0-%i ; ignore_errors=yes ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }\", \"FailureAction\": \"none\", \"FileDescriptorStoreMax\": \"0\", \"FragmentPath\": \"/etc/systemd/system/ceph-osd@.service\", \"GuessMainPID\": \"yes\", \"IOScheduling\": \"0\", \"Id\": \"ceph-osd@vdf.service\", \"IgnoreOnIsolate\": \"no\", \"IgnoreOnSnapshot\": \"no\", \"IgnoreSIGPIPE\": \"yes\", \"InactiveEnterTimestampMonotonic\": \"0\", \"InactiveExitTimestampMonotonic\": \"0\", \"JobTimeoutAction\": \"none\", \"JobTimeoutUSec\": \"0\", \"KillMode\": \"control-group\", \"KillSignal\": \"15\", \"LimitAS\": \"18446744073709551615\", \"LimitCORE\": \"18446744073709551615\", \"LimitCPU\": \"18446744073709551615\", \"LimitDATA\": \"18446744073709551615\", \"LimitFSIZE\": \"18446744073709551615\", \"LimitLOCKS\": \"18446744073709551615\", \"LimitMEMLOCK\": \"65536\", \"LimitMSGQUEUE\": \"819200\", \"LimitNICE\": \"0\", \"LimitNOFILE\": \"4096\", \"LimitNPROC\": \"22974\", \"LimitRSS\": \"18446744073709551615\", \"LimitRTPRIO\": 
\"0\", \"LimitRTTIME\": \"18446744073709551615\", \"LimitSIGPENDING\": \"22974\", \"LimitSTACK\": \"18446744073709551615\", \"LoadState\": \"loaded\", \"MainPID\": \"0\", \"MemoryAccounting\": \"no\", \"MemoryCurrent\": \"18446744073709551615\", \"MemoryLimit\": \"18446744073709551615\", \"MountFlags\": \"0\", \"Names\": \"ceph-osd@vdf.service\", \"NeedDaemonReload\": \"no\", \"Nice\": \"0\", \"NoNewPrivileges\": \"no\", \"NonBlocking\": \"no\", \"NotifyAccess\": \"none\", \"OOMScoreAdjust\": \"0\", \"OnFailureJobMode\": \"replace\", \"PermissionsStartOnly\": \"no\", \"PrivateDevices\": \"no\", \"PrivateNetwork\": \"no\", \"PrivateTmp\": \"no\", \"ProtectHome\": \"no\", \"ProtectSystem\": \"no\", \"RefuseManualStart\": \"no\", \"RefuseManualStop\": \"no\", \"RemainAfterExit\": \"no\", \"Requires\": \"basic.target\", \"Restart\": \"always\", \"RestartUSec\": \"10s\", \"Result\": \"success\", \"RootDirectoryStartOnly\": \"no\", \"RuntimeDirectoryMode\": \"0755\", \"SameProcessGroup\": \"no\", \"SecureBits\": \"0\", \"SendSIGHUP\": \"no\", \"SendSIGKILL\": \"yes\", \"Slice\": \"system-ceph\\\\x5cx2dosd.slice\", \"StandardError\": \"inherit\", \"StandardInput\": \"null\", \"StandardOutput\": \"journal\", \"StartLimitAction\": \"none\", \"StartLimitBurst\": \"5\", \"StartLimitInterval\": \"10000000\", \"StartupBlockIOWeight\": \"18446744073709551615\", \"StartupCPUShares\": \"18446744073709551615\", \"StatusErrno\": \"0\", \"StopWhenUnneeded\": \"no\", \"SubState\": \"dead\", \"SyslogLevelPrefix\": \"yes\", \"SyslogPriority\": \"30\", \"SystemCallErrorNumber\": \"0\", \"TTYReset\": \"no\", \"TTYVHangup\": \"no\", \"TTYVTDisallocate\": \"no\", \"TasksAccounting\": \"no\", \"TasksCurrent\": \"18446744073709551615\", \"TasksMax\": \"18446744073709551615\", \"TimeoutStartUSec\": \"2min\", \"TimeoutStopUSec\": \"15s\", \"TimerSlackNSec\": \"50000\", \"Transient\": \"no\", \"Type\": \"simple\", \"UMask\": \"0022\", \"UnitFilePreset\": \"disabled\", \"UnitFileState\": 
\"enabled\", \"Wants\": \"system-ceph\\\\x5cx2dosd.slice\", \"WatchdogTimestampMonotonic\": \"0\", \"WatchdogUSec\": \"0\"}}", "", "TASK [ceph-osd : set_fact openstack_keys_tmp - preserve backward compatibility after the introduction of the ceph_keys module] ***", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:87", "Monday 20 August 2018 06:32:16 -0400 (0:00:02.945) 0:02:57.152 ********* ", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'profile rbd', u'osd': u'profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics'}, u'mode': u'0600', u'key': u'AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==', u'name': u'client.openstack'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"profile rbd\", \"osd\": \"profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics\"}, \"key\": \"AQB3kXpbAAAAABAAcCPNLLBq5L8h/sbL3v6wkQ==\", \"mode\": \"0600\", \"name\": \"client.openstack\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mds': u'allow *', u'osd': u'allow rw', u'mgr': u'allow *', u'mon': u\"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\"}, u'mode': u'0600', u'key': u'AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==', u'name': u'client.manila'}) => {\"changed\": false, \"item\": {\"caps\": {\"mds\": \"allow *\", \"mgr\": \"allow *\", \"mon\": \"allow r, allow command 'auth del', allow command 'auth caps', allow command 'auth get', allow command 'auth get-or-create'\", \"osd\": \"allow rw\"}, \"key\": \"AQB3kXpbAAAAABAAxER5sPH7n06jJRAeMBD9HQ==\", \"mode\": \"0600\", \"name\": \"client.manila\"}, \"skip_reason\": \"Conditional result was False\"}", "skipping: [ceph-0] => (item={u'caps': {u'mgr': u'allow *', u'mon': u'allow rw', u'osd': u'allow rwx'}, u'mode': u'0600', 
u'key': u'AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==', u'name': u'client.radosgw'}) => {\"changed\": false, \"item\": {\"caps\": {\"mgr\": \"allow *\", \"mon\": \"allow rw\", \"osd\": \"allow rwx\"}, \"key\": \"AQB3kXpbAAAAABAAn7BFhvmwvmOaea/Tu5WRSA==\", \"mode\": \"0600\", \"name\": \"client.radosgw\"}, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : set_fact keys - override keys_tmp with keys] ******************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/main.yml:95", "Monday 20 August 2018 06:32:16 -0400 (0:00:00.079) 0:02:57.231 ********* ", "skipping: [ceph-0] => {\"changed\": false, \"skip_reason\": \"Conditional result was False\"}", "", "TASK [ceph-osd : wait for all osd to be up] ************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:2", "Monday 20 August 2018 06:32:16 -0400 (0:00:00.077) 0:02:57.309 ********* ", "changed: [ceph-0 -> 192.168.24.12] => {\"attempts\": 1, \"changed\": true, \"cmd\": \"test \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_osds\\\"])')\\\" = \\\"$(docker exec ceph-mon-controller-0 ceph --cluster ceph -s -f json | python -c 'import sys, json; print(json.load(sys.stdin)[\\\"osdmap\\\"][\\\"osdmap\\\"][\\\"num_up_osds\\\"])')\\\"\", \"delta\": \"0:00:00.825873\", \"end\": \"2018-08-20 10:32:17.924597\", \"rc\": 0, \"start\": \"2018-08-20 10:32:17.098724\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : list existing pool(s)] ****************************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:12", "Monday 20 August 2018 06:32:18 -0400 (0:00:01.389) 0:02:58.698 ********* ", "changed: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'images', u'rule_name': 
u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"images\", \"size\"], \"delta\": \"0:00:00.393910\", \"end\": \"2018-08-20 10:32:18.699797\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"images\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:18.305887\", \"stderr\": \"Error ENOENT: unrecognized pool 'images'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'images'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.12] => (item={u'application': u'openstack_gnocchi', u'pg_num': 32, u'name': u'metrics', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"metrics\", \"size\"], \"delta\": \"0:00:00.372823\", \"end\": \"2018-08-20 10:32:19.287691\", \"failed_when_result\": false, \"item\": {\"application\": \"openstack_gnocchi\", \"name\": \"metrics\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:18.914868\", \"stderr\": \"Error ENOENT: unrecognized pool 'metrics'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'metrics'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'backups', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"backups\", \"size\"], \"delta\": \"0:00:00.342221\", \"end\": \"2018-08-20 10:32:19.833503\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"backups\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, 
\"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:19.491282\", \"stderr\": \"Error ENOENT: unrecognized pool 'backups'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'backups'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'vms', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"vms\", \"size\"], \"delta\": \"0:00:00.352919\", \"end\": \"2018-08-20 10:32:20.395552\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"vms\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:20.042633\", \"stderr\": \"Error ENOENT: unrecognized pool 'vms'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'vms'\"], \"stdout\": \"\", \"stdout_lines\": []}", "changed: [ceph-0 -> 192.168.24.12] => (item={u'application': u'rbd', u'pg_num': 32, u'name': u'volumes', u'rule_name': u'replicated_rule'}) => {\"changed\": true, \"cmd\": [\"docker\", \"exec\", \"ceph-mon-controller-0\", \"ceph\", \"--cluster\", \"ceph\", \"osd\", \"pool\", \"get\", \"volumes\", \"size\"], \"delta\": \"0:00:00.323752\", \"end\": \"2018-08-20 10:32:20.905134\", \"failed_when_result\": false, \"item\": {\"application\": \"rbd\", \"name\": \"volumes\", \"pg_num\": 32, \"rule_name\": \"replicated_rule\"}, \"msg\": \"non-zero return code\", \"rc\": 2, \"start\": \"2018-08-20 10:32:20.581382\", \"stderr\": \"Error ENOENT: unrecognized pool 'volumes'\", \"stderr_lines\": [\"Error ENOENT: unrecognized pool 'volumes'\"], \"stdout\": \"\", \"stdout_lines\": []}", "", "TASK [ceph-osd : set_fact rule_name before luminous] ***************************", "task path: /usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml:21", "Monday 20 August 2018 
06:32:20 -0400 (0:00:02.915) 0:03:01.613 ********* ", "fatal: [ceph-0]: FAILED! => {\"msg\": \"The conditional check 'ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']): 'dict object' has no attribute u'dummy'\\n\\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-osd/tasks/openstack_config.yml': line 21, column 3, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n\\n- name: set_fact rule_name before luminous\\n ^ here\\n\"}", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called before restart] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.032) 0:03:01.646 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy mon restart script] **********************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.646 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.647 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mon daemon(s) - container] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.647 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _mon_handler_called after restart] ********", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called before restart] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy osd restart script] **********************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.648 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.649 ********* ", "", 
"RUNNING HANDLER [ceph-defaults : restart ceph osds daemon(s) - container] ******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.649 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _osd_handler_called after restart] ********", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.649 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called before restart] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.650 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy mds restart script] **********************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.650 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mds daemon(s) - container] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _mds_handler_called after restart] ********", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.651 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called before restart] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy rgw restart script] **********************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.652 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rgw daemon(s) - container] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _rgw_handler_called after restart] ********", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* ", "", "RUNNING HANDLER 
[ceph-defaults : set _rbdmirror_handler_called before restart] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.653 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy rbd mirror restart script] ***************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph rbd mirror daemon(s) - container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.654 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _rbdmirror_handler_called after restart] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.655 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called before restart] *******", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* ", "", "RUNNING HANDLER [ceph-defaults : copy mgr restart script] **********************", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - non container] ***", "Monday 20 August 2018 06:32:20 -0400 (0:00:00.000) 0:03:01.656 ********* ", "", "RUNNING HANDLER [ceph-defaults : restart ceph mgr daemon(s) - container] *******", "Monday 20 August 2018 06:32:21 -0400 (0:00:00.000) 0:03:01.657 ********* ", "", "RUNNING HANDLER [ceph-defaults : set _mgr_handler_called after restart] ********", "Monday 20 August 2018 06:32:21 -0400 (0:00:00.000) 0:03:01.657 ********* ", "", "PLAY RECAP *********************************************************************", "ceph-0 : ok=68 changed=15 unreachable=0 failed=1 ", "compute-0 : ok=2 changed=0 unreachable=0 failed=0 ", "controller-0 : ok=121 changed=22 unreachable=0 failed=0 ", "", "", "INSTALLER STATUS ***************************************************************", "Install 
Ceph Monitor : Complete (0:01:01)", "Install Ceph Manager : Complete (0:00:24)", "Install Ceph OSD : In Progress (0:01:24)", "\tThis phase can be restarted by running: roles/ceph-osd/tasks/main.yml", "", "Monday 20 August 2018 06:32:21 -0400 (0:00:00.004) 0:03:01.662 ********* ", "=============================================================================== "]} >2018-08-20 06:32:21,418 p=1013 u=mistral | NO MORE HOSTS LEFT ************************************************************* >2018-08-20 06:32:21,419 p=1013 u=mistral | PLAY RECAP ********************************************************************* >2018-08-20 06:32:21,419 p=1013 u=mistral | ceph-0 : ok=99 changed=46 unreachable=0 failed=0 >2018-08-20 06:32:21,419 p=1013 u=mistral | compute-0 : ok=117 changed=57 unreachable=0 failed=0 >2018-08-20 06:32:21,419 p=1013 u=mistral | controller-0 : ok=157 changed=78 unreachable=0 failed=0 >2018-08-20 06:32:21,419 p=1013 u=mistral | undercloud : ok=22 changed=11 unreachable=0 failed=1 >2018-08-20 06:32:21,427 p=1013 u=mistral | Monday 20 August 2018 06:32:21 -0400 (0:03:05.474) 0:13:03.457 ********* >2018-08-20 06:32:21,428 p=1013 u=mistral | ===============================================================================