Bug 1460166

Summary: Create cluster job does not fail even when it should
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Martin Kudlej <mkudlej>
Component: Ceph Integration
Assignee: Nishanth Thomas <nthomas>
Status: CLOSED WONTFIX
QA Contact: sds-qe-bugs
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 3
CC: mbukatov, mkarnik, mkudlej, nthomas, ppenicka, sankarshan
Target Milestone: ---
Target Release: 4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: tendrl-commons-3.0-alpha.10.el7scon.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-19 05:41:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Martin Kudlej 2017-06-09 10:07:04 UTC
Description of problem:
I tried to create a Ceph cluster and one of the ceph-installer jobs ended with a non-zero return code.
The cluster create job is still in "processing" (not failed) even though the ceph-installer job has failed.
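A minimal sketch of the status propagation the parent job is missing. All names here are hypothetical (not from the tendrl code base); the only grounded detail is the ceph-installer task payload shape shown in "Additional info" below, which carries a top-level "succeeded" flag:

```python
# Hypothetical sketch: derive the parent "create cluster" job state from its
# sub-job results. If any finished sub-job reports succeeded == False, the
# parent must transition to "failed" instead of staying in "processing".

def parent_job_status(sub_jobs):
    """Return "failed", "processing", or "finished" for the parent job.

    Each sub-job is a dict shaped like a ceph-installer task payload,
    with a boolean "succeeded" field (None while the task is still running).
    """
    for job in sub_jobs:
        if job.get("succeeded") is False:
            return "failed"       # a sub-job failed -> fail the parent job
    if any(job.get("succeeded") is None for job in sub_jobs):
        return "processing"       # at least one sub-job still running
    return "finished"             # all sub-jobs completed successfully

# Example mirroring this bug: the /api/mon/configure task reported
# "succeeded": false, so the cluster create job should end up "failed".
jobs = [{"endpoint": "/api/mon/configure", "succeeded": False}]
print(parent_job_status(jobs))  # failed
```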


Version-Release number of selected component (if applicable):
ceph-ansible-2.2.11-1.el7scon.noarch
ceph-base-11.2.0-0.el7.x86_64
ceph-common-11.2.0-0.el7.x86_64
ceph-installer-1.3.0-1.el7scon.noarch
ceph-mon-11.2.0-0.el7.x86_64
ceph-osd-11.2.0-0.el7.x86_64
ceph-selinux-11.2.0-0.el7.x86_64
etcd-3.1.7-1.el7.x86_64
libcephfs2-11.2.0-0.el7.x86_64
python-cephfs-11.2.0-0.el7.x86_64
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-alerting-3.0-alpha.3.el7scon.noarch
tendrl-api-3.0-alpha.4.el7scon.noarch
tendrl-api-doc-3.0-alpha.4.el7scon.noarch
tendrl-api-httpd-3.0-alpha.4.el7scon.noarch
tendrl-commons-3.0-alpha.8.el7scon.noarch
tendrl-dashboard-3.0-alpha.4.el7scon.noarch
tendrl-node-agent-3.0-alpha.8.el7scon.noarch
tendrl-node-monitoring-3.0-alpha.4.el7scon.noarch
tendrl-performance-monitoring-3.0-alpha.6.el7scon.noarch

How reproducible:
100%, as far as I can tell

Steps to Reproduce:
1. Try to create a cluster.
2. Wait until one of the ceph-installer jobs fails.

Actual results:
The cluster create job has not failed even though a sub-job failed.

Expected results:
The cluster create job should fail when a sub-job fails.
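Note that the ansible stdout in "Additional info" below contains two kinds of failure: an ignored one (the "rpm -q ntp" check, which prints "...ignoring" and lets the play continue) and a real fatal one (the final "ceph monitor mkfs with keyring" task). A sketch of telling them apart, purely for illustration; in practice the authoritative signal is the task's top-level "succeeded": false flag, not stdout scraping:

```python
# Illustrative only: count ansible 'fatal:' task results that are NOT
# followed by '...ignoring' before the next TASK header. Scanning stdout
# is fragile; the ceph-installer task's "succeeded" flag is authoritative.

def real_failures(stdout):
    """Count fatal task results that were not ignored by ansible."""
    failures = 0
    for block in stdout.split("fatal: ")[1:]:   # text after each fatal marker
        head = block.split("TASK [")[0]         # up to the next task header
        if "...ignoring" not in head:
            failures += 1
    return failures

log = ("TASK [check ntp installation on redhat]\n"
       "fatal: [10.70.16.210]: FAILED! ...\n...ignoring\n"
       "TASK [ceph monitor mkfs with keyring]\n"
       "fatal: [10.70.16.210]: FAILED!\n")
print(real_failures(log))  # 1
```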

Additional info:
{
  "endpoint": "/api/mon/configure",
  "succeeded": false,
  "stdout": "Using /usr/share/ceph-ansible/ansible.cfg as config file\n\nPLAY [mons,agents,osds,mdss,rgws,nfss,restapis,rbdmirrors,clients,mgrs] ********\n\nTASK [check for python2] *******************************************************\nok: [10.70.16.210] => {\n    \"changed\": false, \n    \"stat\": {\n        \"atime\": 1496912085.322, \n        \"ctime\": 1495131703.365, \n        \"dev\": 64769, \n        \"executable\": true, \n        \"exists\": true, \n        \"gid\": 0, \n        \"gr_name\": \"root\", \n        \"inode\": 12730281, \n        \"isblk\": false, \n        \"ischr\": false, \n        \"isdir\": false, \n        \"isfifo\": false, \n        \"isgid\": false, \n        \"islnk\": true, \n        \"isreg\": false, \n        \"issock\": false, \n        \"isuid\": false, \n        \"lnk_source\": \"/usr/bin/python2.7\", \n        \"mode\": \"0777\", \n        \"mtime\": 1495131703.365, \n        \"nlink\": 1, \n        \"path\": \"/usr/bin/python\", \n        \"pw_name\": \"root\", \n        \"readable\": true, \n        \"rgrp\": true, \n        \"roth\": true, \n        \"rusr\": true, \n        \"size\": 7, \n        \"uid\": 0, \n        \"wgrp\": true, \n        \"woth\": true, \n        \"writeable\": false, \n        \"wusr\": true, \n        \"xgrp\": true, \n        \"xoth\": true, \n        \"xusr\": true\n    }\n}\n\nTASK [install python2 for Debian based systems] ********************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [install python2 for Fedora] **********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [gathering facts] *********************************************************\nok: [10.70.16.210]\n\nTASK [install required packages for Fedora > 23] 
*******************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nPLAY [mons] ********************************************************************\n\nTASK [ceph.ceph-common : fail on unsupported system] ***************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail on unsupported architecture] *********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail on unsupported distribution] *********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail on unsupported distribution for red hat ceph storage] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail on unsupported distribution for ubuntu cloud archive] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail on unsupported ansible version] ******************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail if systemd is not present] ***********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"ceph_release\": \"kraken\"\n    }, \n    
\"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nskipping: [10.70.16.210] => (item={'key': u'firefly', 'value': 0.8})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"firefly\", \n        \"value\": 0.8\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'giant', 'value': 0.87})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"giant\", \n        \"value\": 0.87\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'infernalis', 'value': 9})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"infernalis\", \n        \"value\": 9\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'hammer', 'value': 0.94})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"hammer\", \n        \"value\": 0.94\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'kraken', 'value': 11})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"kraken\", \n        \"value\": 11\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'luminous', 'value': 12})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"luminous\", \n        \"value\": 12\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'emperor', 'value': 0.72})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"emperor\", \n        \"value\": 0.72\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'jewel', 
'value': 10})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"jewel\", \n        \"value\": 10\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item={'key': u'dumpling', 'value': 0.67})  => {\n    \"changed\": false, \n    \"item\": {\n        \"key\": \"dumpling\", \n        \"value\": 0.67\n    }, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : include] **********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : include] **********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : include] **********************************************\nincluded: /usr/share/ceph-ansible/roles/ceph-common/tasks/./misc/ntp_redhat.yml for 10.70.16.210\n\nTASK [ceph.ceph-common : include] **********************************************\nincluded: /usr/share/ceph-ansible/roles/ceph-common/tasks/checks/check_ntp_redhat.yml for 10.70.16.210\n\nTASK [ceph.ceph-common : check ntp installation on redhat] *********************\nfatal: [10.70.16.210]: FAILED! 
=> {\n    \"changed\": false, \n    \"cmd\": [\n        \"rpm\", \n        \"-q\", \n        \"ntp\"\n    ], \n    \"delta\": \"0:00:00.021889\", \n    \"end\": \"2017-06-08 08:31:52.092224\", \n    \"failed\": true, \n    \"rc\": 1, \n    \"start\": \"2017-06-08 08:31:52.070335\", \n    \"warnings\": [\n        \"Consider using yum, dnf or zypper module rather than running rpm\"\n    ]\n}\n\nSTDOUT:\n\npackage ntp is not installed\n...ignoring\n\nTASK [ceph.ceph-common : install ntp on redhat] ********************************\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"rc\": 0, \n    \"results\": [\n        \"Loaded plugins: search-disabled-repos, subscription-manager\\nResolving Dependencies\\n--> Running transaction check\\n---> Package ntp.x86_64 0:4.2.6p5-25.el7_3.2 will be installed\\n--> Processing Dependency: ntpdate = 4.2.6p5-25.el7_3.2 for package: ntp-4.2.6p5-25.el7_3.2.x86_64\\n--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-25.el7_3.2.x86_64\\n--> Running transaction check\\n---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed\\n---> Package ntpdate.x86_64 0:4.2.6p5-25.el7_3.2 will be installed\\n--> Finished Dependency Resolution\\n\\nDependencies Resolved\\n\\n================================================================================\\n Package             Arch       Version                 Repository         Size\\n================================================================================\\nInstalling:\\n ntp                 x86_64     4.2.6p5-25.el7_3.2      RHEL-7.4-beta     547 k\\nInstalling for dependencies:\\n autogen-libopts     x86_64     5.18-5.el7              RHEL-7.4-beta      66 k\\n ntpdate             x86_64     4.2.6p5-25.el7_3.2      RHEL-7.4-beta      86 k\\n\\nTransaction Summary\\n================================================================================\\nInstall  1 Package (+2 Dependent packages)\\n\\nTotal download size: 699 k\\nInstalled size: 1.6 
M\\nDownloading packages:\\n--------------------------------------------------------------------------------\\nTotal                                              3.2 MB/s | 699 kB  00:00     \\nRunning transaction check\\nRunning transaction test\\nTransaction test succeeded\\nRunning transaction\\n  Installing : ntpdate-4.2.6p5-25.el7_3.2.x86_64                            1/3 \\n  Installing : autogen-libopts-5.18-5.el7.x86_64                            2/3 \\n  Installing : ntp-4.2.6p5-25.el7_3.2.x86_64                                3/3 \\n  Verifying  : autogen-libopts-5.18-5.el7.x86_64                            1/3 \\n  Verifying  : ntp-4.2.6p5-25.el7_3.2.x86_64                                2/3 \\n  Verifying  : ntpdate-4.2.6p5-25.el7_3.2.x86_64                            3/3 \\n\\nInstalled:\\n  ntp.x86_64 0:4.2.6p5-25.el7_3.2                                               \\n\\nDependency Installed:\\n  autogen-libopts.x86_64 0:5.18-5.el7    ntpdate.x86_64 0:4.2.6p5-25.el7_3.2   \\n\\nComplete!\\n\"\n    ]\n}\n\nTASK [ceph.ceph-common : start the ntp service] ********************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : include] **********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : get ceph version] *************************************\nok: [10.70.16.210] => {\n    \"changed\": false, \n    \"cmd\": [\n        \"ceph\", \n        \"--version\"\n    ], \n    \"delta\": \"0:00:00.152359\", \n    \"end\": \"2017-06-08 08:32:19.693256\", \n    \"rc\": 0, \n    \"start\": \"2017-06-08 08:32:19.540897\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\nceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)\n\nTASK [ceph.ceph-common : is ceph running already?] 
*****************************\nok: [10.70.16.210 -> 10.70.16.210] => {\n    \"changed\": false, \n    \"cmd\": [\n        \"ceph\", \n        \"--connect-timeout\", \n        \"3\", \n        \"--cluster\", \n        \"test_ceph\", \n        \"fsid\"\n    ], \n    \"delta\": \"0:00:00.128046\", \n    \"end\": \"2017-06-08 08:32:20.390017\", \n    \"failed\": false, \n    \"failed_when_result\": false, \n    \"rc\": 1, \n    \"start\": \"2017-06-08 08:32:20.261971\", \n    \"warnings\": []\n}\n\nSTDERR:\n\nError initializing cluster client: Error('error calling conf_read_file: error code 22',)\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : create a local fetch directory if it does not exist] **\nok: [10.70.16.210 -> localhost] => {\n    \"changed\": false, \n    \"gid\": 989, \n    \"group\": \"ceph-installer\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph-installer\", \n    \"path\": \"/var/lib/ceph-installer/fetch\", \n    \"secontext\": \"system_u:object_r:ceph_installer_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 992\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"monitor_name\": \"mkudlej-usm2-mon1\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : check if 
/var/lib/ceph-installer/fetch directory exists] ***\nok: [10.70.16.210 -> localhost] => {\n    \"changed\": false, \n    \"stat\": {\n        \"exists\": false\n    }\n}\n\nTASK [ceph.ceph-common : check if /var/lib/ceph/mon/test_ceph-mkudlej-usm2-mon1/keyring already exists] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail if /var/lib/ceph/mon/test_ceph-mkudlej-usm2-mon1/keyring doesn't exist] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : get existing initial mon keyring if it already exists but not monitor_keyring.conf in /var/lib/ceph-installer/fetch] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : test existing initial mon keyring] ********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : fail if initial mon keyring found doesn't work] *******\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : write initial mon keyring in /var/lib/ceph-installer/fetch/monitor_keyring.conf if it doesn't exist] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : put initial mon keyring in mon kv store] **************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    
\"ansible_facts\": {\n        \"ceph_version\": \"11.2.0\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"mds_name\": \"mkudlej-usm2-mon1\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"rbd_client_directory_owner\": \"ceph\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"rbd_client_directory_group\": \"ceph\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : set_fact] *********************************************\nok: [10.70.16.210] => {\n    \"ansible_facts\": {\n        \"rbd_client_directory_mode\": \"0770\"\n    }, \n    \"changed\": false\n}\n\nTASK [ceph.ceph-common : check for a ceph socket] ******************************\nok: [10.70.16.210] => {\n    \"changed\": false, \n    \"cmd\": \"stat /var/run/ceph/*.asok > /dev/null 2>&1\", \n    \"delta\": \"0:00:00.022567\", \n    \"end\": \"2017-06-08 08:32:22.867005\", \n    \"failed\": false, \n    \"failed_when_result\": false, \n    \"rc\": 1, \n    \"start\": \"2017-06-08 08:32:22.844438\", \n    \"warnings\": []\n}\n\nTASK [ceph.ceph-common : check for a rados gateway socket] *********************\nok: [10.70.16.210] => {\n    \"changed\": false, \n    \"cmd\": \"stat /var/run/ceph*.asok > /dev/null 2>&1\", \n    \"delta\": \"0:00:00.007687\", \n    \"end\": \"2017-06-08 08:32:23.441480\", \n    \"failed\": false, \n    \"failed_when_result\": false, \n    \"rc\": 1, \n    
\"start\": \"2017-06-08 08:32:23.433793\", \n    \"warnings\": []\n}\n\nTASK [ceph.ceph-common : create ceph initial directories] **********************\nchanged: [10.70.16.210] => (item=/etc/ceph) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/etc/ceph\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/etc/ceph\", \n    \"secontext\": \"system_u:object_r:etc_t:s0\", \n    \"size\": 20, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 91, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/mon) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/mon\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/mon\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/osd) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/osd\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/osd\", \n    \"secontext\": \"unconfined_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/mds) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/mds\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/mds\", \n    \"secontext\": 
\"unconfined_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/tmp) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/tmp\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/tmp\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/radosgw) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/radosgw\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/radosgw\", \n    \"secontext\": \"unconfined_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/bootstrap-rgw) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/bootstrap-rgw\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/bootstrap-rgw\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/bootstrap-mds) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/bootstrap-mds\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/bootstrap-mds\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/lib/ceph/bootstrap-osd) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/lib/ceph/bootstrap-osd\", \n    \"mode\": \"0755\", \n    
\"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/bootstrap-osd\", \n    \"secontext\": \"system_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\n\nTASK [ceph.ceph-common : generate cluster fsid] ********************************\nchanged: [10.70.16.210 -> localhost] => {\n    \"changed\": true, \n    \"cmd\": \"python -c 'import uuid; print(str(uuid.uuid4()))' | tee /var/lib/ceph-installer/fetch/ceph_cluster_uuid.conf\", \n    \"delta\": \"0:00:00.049398\", \n    \"end\": \"2017-06-08 08:32:28.969516\", \n    \"rc\": 0, \n    \"start\": \"2017-06-08 08:32:28.920118\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\n4b7e57ce-ad96-4d50-980b-49b8342e709a\n\nTASK [ceph.ceph-common : reuse cluster fsid when cluster is already running] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : read cluster fsid if it already exists] ***************\nok: [10.70.16.210 -> localhost] => {\n    \"changed\": false, \n    \"cmd\": [\n        \"cat\", \n        \"/var/lib/ceph-installer/fetch/ceph_cluster_uuid.conf\"\n    ], \n    \"delta\": \"0:00:00.004036\", \n    \"end\": \"2017-06-08 08:32:29.337414\", \n    \"rc\": 0, \n    \"start\": \"2017-06-08 08:32:29.333378\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\n4b7e57ce-ad96-4d50-980b-49b8342e709a\n\nTASK [ceph.ceph-common : create ceph conf directory and assemble directory] ****\nok: [10.70.16.210] => (item=/etc/ceph/) => {\n    \"changed\": false, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/etc/ceph/\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/etc/ceph/\", \n    \"secontext\": \"system_u:object_r:etc_t:s0\", \n    \"size\": 20, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/etc/ceph/ceph.d/) => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    
\"item\": \"/etc/ceph/ceph.d/\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/etc/ceph/ceph.d/\", \n    \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\n\nTASK [ceph.ceph-common : generate ceph configuration file: test_ceph.conf] *****\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"checksum\": \"f9c75683e8f50a3ec5c1704b76040aa285bbf82e\", \n    \"dest\": \"/etc/ceph/ceph.d/test_ceph.conf\", \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"md5sum\": \"5b2791b7c9573761ce0576b01fa8a4f3\", \n    \"mode\": \"0644\", \n    \"owner\": \"ceph\", \n    \"secontext\": \"system_u:object_r:etc_t:s0\", \n    \"size\": 703, \n    \"src\": \"/home/ceph-installer/.ansible/tmp/ansible-tmp-1496925150.58-169221031356165/source\", \n    \"state\": \"file\", \n    \"uid\": 167\n}\n\nTASK [ceph.ceph-common : assemble test_ceph.conf and fragments] ****************\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"checksum\": \"f9c75683e8f50a3ec5c1704b76040aa285bbf82e\", \n    \"dest\": \"/etc/ceph/test_ceph.conf\", \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"md5sum\": \"5b2791b7c9573761ce0576b01fa8a4f3\", \n    \"mode\": \"0644\", \n    \"owner\": \"ceph\", \n    \"secontext\": \"system_u:object_r:etc_t:s0\", \n    \"size\": 703, \n    \"src\": \"/etc/ceph/ceph.d/\", \n    \"state\": \"file\", \n    \"uid\": 167\n}\n\nMSG:\n\nOK\n\nTASK [ceph.ceph-common : create rbd client directory] **************************\nok: [10.70.16.210] => (item=/var/run/ceph) => {\n    \"changed\": false, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/run/ceph\", \n    \"mode\": \"0770\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/run/ceph\", \n    \"secontext\": \"system_u:object_r:ceph_var_run_t:s0\", \n    \"size\": 40, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\nchanged: [10.70.16.210] => (item=/var/log/ceph) => {\n    
\"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"item\": \"/var/log/ceph\", \n    \"mode\": \"0770\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/log/ceph\", \n    \"secontext\": \"system_u:object_r:ceph_log_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\n\nTASK [ceph.ceph-common : configure cluster name] *******************************\nchanged: [10.70.16.210] => {\n    \"backup\": \"\", \n    \"changed\": true\n}\n\nMSG:\n\nline added\n\nTASK [ceph.ceph-common : check /etc/default/ceph exist] ************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : configure cluster name] *******************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-common : configure cluster name] *******************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : fail if systemd is not present] ****************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : check if it is atomic host] ********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : set fact for using atomic host] ****************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : allow apt to use a repository over https (debian)] ***\nskipping: [10.70.16.210] => (item=[])  => {\n    \"changed\": false, \n    
\"item\": [], \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : add docker's gpg key] **************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : add docker and debian testing repository] ******\nskipping: [10.70.16.210] => (item=deb https://apt.dockerproject.org/repo/ debian-Maipo main)  => {\n    \"changed\": false, \n    \"item\": \"deb https://apt.dockerproject.org/repo/ debian-Maipo main\", \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\nskipping: [10.70.16.210] => (item=deb http://http.us.debian.org/debian/ testing contrib main)  => {\n    \"changed\": false, \n    \"item\": \"deb http://http.us.debian.org/debian/ testing contrib main\", \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install pip from testing on debian] ************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install docker-py via pip for debian] **********\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install docker on debian] **********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install six via pip] ***************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install docker on ubuntu] **********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    
\"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : enable extras on centos] ***********************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install python-six] ****************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install python-docker-py on red hat / centos] **\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install python-docker on ubuntu] ***************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install docker on red hat / centos] ************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : pause after docker install before starting (on openstack vms)] ***\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : start docker service] **************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : install ntp] ***********************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : set_fact] **************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    
\"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph.ceph-docker-common : set_fact] **************************************\nskipping: [10.70.16.210] => {\n    \"changed\": false, \n    \"skip_reason\": \"Conditional check failed\", \n    \"skipped\": true\n}\n\nTASK [ceph-mon : generate monitor initial keyring] *****************************\nchanged: [10.70.16.210 -> localhost] => {\n    \"changed\": true, \n    \"cmd\": \"python2 -c \\\"import os ; import struct ; import time; import base64 ; key = os.urandom(16) ; header = struct.pack('<hiih',1,int(time.time()),0,len(key)) ; print base64.b64encode(header + key)\\\" | tee /var/lib/ceph-installer/fetch/monitor_keyring.conf\", \n    \"delta\": \"0:00:00.032012\", \n    \"end\": \"2017-06-08 08:32:37.019016\", \n    \"rc\": 0, \n    \"start\": \"2017-06-08 08:32:36.987004\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\nAQDlQzlZAAAAABAAClloNA8YuZIC7w4ZZHv+/g==\n\nTASK [ceph-mon : read monitor initial keyring if it already exists] ************\nok: [10.70.16.210 -> localhost] => {\n    \"changed\": false, \n    \"cmd\": [\n        \"cat\", \n        \"/var/lib/ceph-installer/fetch/monitor_keyring.conf\"\n    ], \n    \"delta\": \"0:00:00.003408\", \n    \"end\": \"2017-06-08 08:32:37.319675\", \n    \"rc\": 0, \n    \"start\": \"2017-06-08 08:32:37.316267\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\nAQDlQzlZAAAAABAAClloNA8YuZIC7w4ZZHv+/g==\n\nTASK [ceph-mon : create monitor initial keyring] *******************************\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"cmd\": [\n        \"ceph-authtool\", \n        \"/var/lib/ceph/tmp/keyring.mon.mkudlej-usm2-mon1\", \n        \"--create-keyring\", \n        \"--name=mon.\", \n        \"--add-key=AQDwLjlZAAAAABAAzN26Ok57p/A2rCIS9DcWoQ==\", \n        \"--cap\", \n        \"mon\", \n        \"allow *\"\n    ], \n    \"delta\": \"0:00:00.021842\", \n    \"end\": \"2017-06-08 08:32:37.933096\", \n    \"rc\": 0, \n    \"start\": 
\"2017-06-08 08:32:37.911254\", \n    \"warnings\": []\n}\n\nSTDOUT:\n\ncreating /var/lib/ceph/tmp/keyring.mon.mkudlej-usm2-mon1\nadded entity mon. auth auth(auid = 18446744073709551615 key=AQDwLjlZAAAAABAAzN26Ok57p/A2rCIS9DcWoQ== with 0 caps)\n\nTASK [ceph-mon : set initial monitor key permissions] **************************\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"mode\": \"0600\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/tmp/keyring.mon.mkudlej-usm2-mon1\", \n    \"secontext\": \"unconfined_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 77, \n    \"state\": \"file\", \n    \"uid\": 167\n}\n\nTASK [ceph-mon : create monitor directory] *************************************\nchanged: [10.70.16.210] => {\n    \"changed\": true, \n    \"gid\": 167, \n    \"group\": \"ceph\", \n    \"mode\": \"0755\", \n    \"owner\": \"ceph\", \n    \"path\": \"/var/lib/ceph/mon/test_ceph-mkudlej-usm2-mon1\", \n    \"secontext\": \"unconfined_u:object_r:ceph_var_lib_t:s0\", \n    \"size\": 6, \n    \"state\": \"directory\", \n    \"uid\": 167\n}\n\nTASK [ceph-mon : ceph monitor mkfs with keyring] *******************************\nfatal: [10.70.16.210]: FAILED! 
=> {\n    \"changed\": true, \n    \"cmd\": [\n        \"ceph-mon\", \n        \"--cluster\", \n        \"test_ceph\", \n        \"--setuser\", \n        \"ceph\", \n        \"--setgroup\", \n        \"ceph\", \n        \"--mkfs\", \n        \"-i\", \n        \"mkudlej-usm2-mon1\", \n        \"--fsid\", \n        \"7bd7504f-9d01-4fd9-9253-e5bf1fb3215b\", \n        \"--keyring\", \n        \"/var/lib/ceph/tmp/keyring.mon.mkudlej-usm2-mon1\"\n    ], \n    \"delta\": \"0:00:00.041730\", \n    \"end\": \"2017-06-08 08:32:39.886193\", \n    \"failed\": true, \n    \"rc\": 1, \n    \"start\": \"2017-06-08 08:32:39.844463\", \n    \"warnings\": []\n}\n\nSTDERR:\n\n2017-06-08 08:32:39.883226 7fce93ca07c0 -1 unable to find any IP address in networks: 10.70.44.0/22\n\nRUNNING HANDLER [ceph.ceph-common : copy mon restart script] *******************\n\nRUNNING HANDLER [ceph.ceph-common : restart ceph mon daemon(s)] ****************\n\nRUNNING HANDLER [ceph.ceph-common : copy osd restart script] *******************\n\nRUNNING HANDLER [ceph.ceph-common : restart ceph osds daemon(s)] ***************\n\nRUNNING HANDLER [ceph.ceph-common : restart ceph mdss] *************************\n\nRUNNING HANDLER [ceph.ceph-common : restart ceph rgws] *************************\n\nRUNNING HANDLER [ceph.ceph-common : restart ceph nfss] *************************\n\nPLAY RECAP *********************************************************************\n10.70.16.210               : ok=32   changed=12   unreachable=0    failed=1   \n\n",
  "started": "2017-06-08 08:31:41.825202",
  "request": "{\"verbose\": false, \"monitor_secret\": \"AQDwLjlZAAAAABAAzN26Ok57p/A2rCIS9DcWoQ==\", \"cluster_name\": \"test_ceph\", \"host\": \"10.70.16.210\", \"redhat_storage\": false, \"public_network\": \"10.70.44.0/22\", \"address\": \"10.70.16.210\", \"cluster_network\": \"10.70.16.0/24\", \"calamari\": false, \"monitors\": [], \"fsid\": \"7bd7504f-9d01-4fd9-9253-e5bf1fb3215b\"}",

****  "exit_code": 2,  *****

  "ended": "2017-06-08 08:32:40.040458",
  "http_method": "POST",
  "command": "/bin/ansible-playbook -v -u ceph-installer /usr/share/ceph-ansible/site.yml.sample -i /tmp/396e3984-6926-486a-bf54-e19b0f7cbb03_kqBkEs --extra-vars {\"monitor_secret\": \"AQDwLjlZAAAAABAAzN26Ok57p/A2rCIS9DcWoQ==\", \"ceph_stable\": true, \"cluster\": \"test_ceph\", \"redhat_storage\": false, \"public_network\": \"10.70.44.0/22\", \"fetch_directory\": \"/var/lib/ceph-installer/fetch\", \"cluster_network\": \"10.70.16.0/24\", \"calamari\": false, \"monitors\": [], \"fsid\": \"7bd7504f-9d01-4fd9-9253-e5bf1fb3215b\"} --skip-tags package-install",
  "user_agent": null,
  "stderr": "[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting\n\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale..\nThis feature will be removed in a future release. Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: ansible.utils.unicode.to_bytes is deprecated.  Use \nansible.module_utils._text.to_bytes instead.\nThis feature will be removed in \nversion 2.4. 
Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: ansible.utils.unicode.to_unicode is deprecated.  Use \nansible.module_utils._text.to_text instead.\nThis feature will be removed in \nversion 2.4. Deprecation warnings can be disabled by setting \ndeprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: always_run is deprecated. Use check_mode = no instead..\n\nThis feature will be removed in version 2.4. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Could not create retry file '/usr/share/ceph-\nansible/site.yml.retry'.         [Errno 13] Permission denied: u'/usr/share\n/ceph-ansible/site.yml.retry'\n",
  "identifier": "396e3984-6926-486a-bf54-e19b0f7cbb03"
}

Comment 4 Nishanth Thomas 2017-06-15 08:58:23 UTC
Are you seeing this issue with gluster cluster create, or is it a case specific to ceph?

Comment 5 Martin Kudlej 2017-06-15 10:39:18 UTC
The failure was triggered by a non-zero ceph-installer return code, so I expect this is a problem in how Tendrl reports status based on ceph-installer results. Do you think it can also be seen with Tendrl - gdeploy?

Comment 6 Nishanth Thomas 2017-06-16 04:08:52 UTC
There is a chance, even though I am not seeing any issues with respect to the 'gluster' code. It is always good to confirm.

Comment 7 Martin Bukatovic 2017-06-16 17:05:56 UTC
Just for the record:

Alternative reproducer
======================

1. When setting up repositories, introduce a mistake in the yum
   repositories for the ceph osd machines which would prevent packages
   from being installed successfully (e.g. enforce gpg check for
   unsigned packages or something similar).

2. Create a ceph cluster via the tendrl web UI.

3. When the message about the started ceph-installer task shows up in
   the tendrl UI, monitor the ceph-installer task directly via its REST API:
   
   http://mbukatov-usm1-server.usmqe.example.com:8181/api/tasks/a207ab02-e010-4bed-9c7b-56a168d6d9de/

   Where a207ab02-e010-4bed-9c7b-56a168d6d9de is the ID of the ceph-installer
   task which you expect to fail (because of the problem introduced in step #1).

4. Wait until the ceph-installer task fails, so that `succeeded = false` and
   `ended` contains a timestamp value.

5. Go back to the tendrl UI: there is no information about the problem.
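For reference, steps 3 and 4 can be scripted. This is a minimal sketch, assuming the task endpoint returns a JSON document with `ended`, `succeeded` and `exit_code` fields as shown in the description above; the URL is the example value from this reproducer, not a fixed address. The failure condition it checks (`ended` set, `succeeded` false) is the state Tendrl should surface as a failed cluster-create job.

```python
# Sketch: poll a ceph-installer task via its REST API and detect failure.
# Host and task id are the example values from this reproducer.
import json
import time
from urllib.request import urlopen

TASK_URL = ("http://mbukatov-usm1-server.usmqe.example.com:8181"
            "/api/tasks/a207ab02-e010-4bed-9c7b-56a168d6d9de/")

def task_failed(task):
    """A ceph-installer task has failed once 'ended' holds a timestamp
    and 'succeeded' is false (the condition from step 4)."""
    return bool(task.get("ended")) and not task.get("succeeded")

def wait_for_task(url, poll_interval=10, timeout=3600):
    """Poll the task endpoint until 'ended' is set, then return the task dict."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urlopen(url) as resp:
            task = json.load(resp)
        if task.get("ended"):
            return task
        time.sleep(poll_interval)
    raise TimeoutError("task did not finish within %s seconds" % timeout)

if __name__ == "__main__":
    task = wait_for_task(TASK_URL)
    if task_failed(task):
        print("ceph-installer task failed, exit_code:", task.get("exit_code"))
```

With the task JSON from the description (`"exit_code": 2`, `ended` set), `task_failed` returns `True`, while a still-running task (`ended` empty) returns `False`.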

Comment 10 Shubhendu Tripathi 2018-11-19 05:41:47 UTC
This product is EOL now