Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2017:3387
Description of problem:
Purge-cluster fails to destroy the ceph journal partitions on the OSD nodes. Because of this, if we try to create a new ceph cluster on the same nodes, it fails at the task below:

TASK [ceph-osd : manually prepare ceph "filestore" non-containerized osd disk(s) with collocated osd data and journal] ********************************************************************************************
failed: [magna0host] (item=[{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-10-09 13:29:17.183807', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.036723', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-10-09 13:29:17.147084', 'failed': False}, u'/dev/sdb']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "--filestore", "/dev/sdb"], "delta": "0:00:00.339455", "end": "2017-10-09 13:29:19.842762", "failed": true, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "delta": "0:00:00.036723", "end": "2017-10-09 13:29:17.183807", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-10-09 13:29:17.147084", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-10-09 13:29:19.503307", "stderr": "Could not create partition 2 from 20973568 to 41945087\nError encountered; not saving changes.\n'/sbin/sgdisk --new=2:0:+10240M --change-name=2:ceph journal --partition-guid=2:05c7fca5-456f-4133-96d8-8f4df8621377 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4", "stderr_lines": ["Could not create partition 2 from 20973568 to 41945087", "Error encountered; not saving changes.", "'/sbin/sgdisk --new=2:0:+10240M --change-name=2:ceph journal --partition-guid=2:05c7fca5-456f-4133-96d8-8f4df8621377 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4"], "stdout": "", "stdout_lines": []}

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.0-0.1.rc9.el7cp.noarch
ansible-2.3.2.0-2.el7.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Purge the cluster using the purge-cluster.yml ansible-playbook.
2. Check on the OSD nodes whether the ceph partitions have been removed (see the verification sketch at the end of this report).
3. Try creating a cluster again using the ansible playbook; it fails as described above.

Actual results:
Purge-cluster does not destroy all ceph partitions on the OSD nodes.

Expected results:
Purge-cluster should destroy all ceph partitions on the OSD nodes.

Additional Info:
The ansible-playbook run for purge-cluster.yml completes successfully, but the ceph journal partitions are still not removed.
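
Verification sketch for step 2 of the reproducer: a quick way to confirm whether ceph partitions survived the purge on an OSD node is to run the same check the playbook itself uses (assuming /dev/sdb is the affected data disk, as in the log above):

    # list the partition table; a leftover "ceph journal" partition will show up here
    parted --script /dev/sdb print
    lsblk /dev/sdb

    # the same egrep test ceph-ansible runs before preparing the disk
    parted --script /dev/sdb print | egrep -sq '^ 1.*ceph' && echo "ceph partition still present"

If a ceph journal partition is still listed after the purge, ceph-disk prepare cannot create partition 2 and fails with the sgdisk error shown in the task output above.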
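
Possible manual workaround (a sketch only, not part of the playbook): on each affected OSD node, the leftover partitions can be wiped by hand before re-running the deployment. This assumes /dev/sdb is the disk from the log above; the commands are destructive and erase the entire disk, so double-check the device name first.

    # zap the disk with ceph's own tooling (if ceph-disk is installed)
    ceph-disk zap /dev/sdb

    # or wipe the GPT/MBR structures and filesystem signatures directly
    sgdisk --zap-all /dev/sdb
    wipefs --all /dev/sdb

    # re-read the partition table
    partprobe /dev/sdb

After this, re-running the deployment playbook should be able to prepare the disk from a clean state.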