
Bug 1499871

Summary: [Ceph-ansible]: purge-cluster fails, is not zapping the disks
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Ansible
Version: 3.0
Status: CLOSED ERRATA
Severity: medium
Priority: low
Reporter: Vidushi Mishra <vimishra>
Assignee: Sébastien Han <shan>
QA Contact: Vidushi Mishra <vimishra>
Docs Contact:
CC: adeza, anharris, aschoen, ceph-eng-bugs, gmeno, hnallurv, nthomas, sankarshan, vimishra
Target Milestone: rc
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.0.2-1; Ubuntu: ceph-ansible-3.0.2-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-05 23:47:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vidushi Mishra 2017-10-09 14:02:55 UTC
Description of problem:
Purge-cluster fails to destroy the Ceph journal partitions on the OSD nodes.
Because of this, if we try to create a new Ceph cluster on the same nodes, the deployment fails at the task below:

TASK [ceph-osd : manually prepare ceph "filestore" non-containerized osd disk(s) with collocated osd data and journal] ********************************************************************************************
failed: [magna0host] (item=[{'_ansible_parsed': True, 'stderr_lines': [], '_ansible_item_result': True, u'end': u'2017-10-09 13:29:17.183807', '_ansible_no_log': False, u'stdout': u'', u'cmd': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'rc': 1, 'item': u'/dev/sdb', u'delta': u'0:00:00.036723', u'stderr': u'', u'changed': False, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed_when_result': False, u'start': u'2017-10-09 13:29:17.147084', 'failed': False}, u'/dev/sdb']) => {"changed": true, "cmd": ["ceph-disk", "prepare", "--cluster", "ceph", "--filestore", "/dev/sdb"], "delta": "0:00:00.339455", "end": "2017-10-09 13:29:19.842762", "failed": true, "item": [{"_ansible_item_result": true, "_ansible_no_log": false, "_ansible_parsed": true, "changed": false, "cmd": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "delta": "0:00:00.036723", "end": "2017-10-09 13:29:17.183807", "failed": false, "failed_when_result": false, "invocation": {"module_args": {"_raw_params": "parted --script /dev/sdb print | egrep -sq '^ 1.*ceph'", "_uses_shell": true, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}}, "item": "/dev/sdb", "rc": 1, "start": "2017-10-09 13:29:17.147084", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}, "/dev/sdb"], "rc": 1, "start": "2017-10-09 13:29:19.503307", "stderr": "Could not create partition 2 from 20973568 to 41945087\nError encountered; not saving changes.\n'/sbin/sgdisk --new=2:0:+10240M --change-name=2:ceph journal --partition-guid=2:05c7fca5-456f-4133-96d8-8f4df8621377 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4", "stderr_lines": ["Could not create partition 2 from 20973568 to 41945087", "Error encountered; not saving changes.", "'/sbin/sgdisk --new=2:0:+10240M --change-name=2:ceph journal --partition-guid=2:05c7fca5-456f-4133-96d8-8f4df8621377 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb' failed with status code 4"], "stdout": "", "stdout_lines": []}
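As a manual workaround until the playbook zaps the disks itself, the leftover journal partition can be wiped by hand on each OSD node before re-running the deployment. This is only a sketch using generic disk tools; /dev/sdb is taken from the log above and will differ per node, and wiping destroys any remaining data on the device:

  # Inspect the disk for leftover Ceph partitions
  parted --script /dev/sdb print

  # Destroy the GPT and MBR partition tables (data-destructive)
  sgdisk --zap-all /dev/sdb
  # or, using the Ceph tooling instead of sgdisk:
  ceph-disk zap /dev/sdb

  # Make the kernel re-read the now-empty partition table
  partprobe /dev/sdb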


Version-Release number of selected component (if applicable):

ceph-ansible-3.0.0-0.1.rc9.el7cp.noarch
ansible-2.3.2.0-2.el7.noarch


How reproducible:
2/2

Steps to Reproduce:
1. Purge the cluster using the purge-cluster.yml ansible-playbook.
2. Check on the OSD nodes whether the Ceph partitions have been removed.
3. Try creating a new cluster with the ansible playbook; it fails as shown in the description (see the command sketch below).
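For reference, the steps above map roughly to the following commands; the inventory file name and playbook paths are assumptions based on a standard ceph-ansible 3.x checkout, not taken from this report:

  # 1. Purge the existing cluster
  ansible-playbook -i hosts infrastructure-playbooks/purge-cluster.yml

  # 2. On each OSD node, check whether the Ceph data/journal partitions are gone
  parted --script /dev/sdb print

  # 3. Redeploy; with a leftover journal partition this fails as shown above
  ansible-playbook -i hosts site.yml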

Actual results:
Purge cluster does not destroy all Ceph partitions on the OSD nodes.

Expected results:
Purge cluster should destroy all Ceph partitions on the OSD nodes.

Additional Info:
The ansible-playbook run for purge-cluster.yml passes, but the Ceph journal partitions are still not removed.
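In other words, a quick check on an OSD node right after the purge (a sketch, reusing the /dev/sdb device and the partition-name match from the failed task above) still finds the journal partition that should have been zapped:

  # Expected after a successful purge: no output, i.e. no Ceph partitions left.
  # Observed here: the 'ceph journal' partition is still listed, which is why
  # 'ceph-disk prepare --filestore /dev/sdb' later fails to create partition 2.
  parted --script /dev/sdb print | egrep '^ .*ceph'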

Comment 3 Sébastien Han 2017-10-09 16:35:30 UTC
Currently trying to reproduce this.

Comment 4 Sébastien Han 2017-10-09 16:53:50 UTC
I can't reproduce this. Can I get access to a setup where it can be reproduced?
Thanks.

Comment 8 Sébastien Han 2017-10-10 08:01:29 UTC
Verified the fix on your machine, thanks for reporting this.

Comment 9 Sébastien Han 2017-10-10 08:16:55 UTC
will be in rc20

Comment 14 Vidushi Mishra 2017-10-26 09:26:51 UTC
OSD partitions are cleared after a purge-cluster.
Moving the bug to VERIFIED for ceph-ansible-3.0.4-1.el7cp.

Comment 17 errata-xmlrpc 2017-12-05 23:47:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387