Bug 1572933

Summary: infrastructure-playbooks/shrink-osd.yml leaves behind NVMe partition; scenario non-collocated
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Randy Martinez <r.martinez>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Erin Donnelly <edonnell>
Priority: medium
Version: 3.0
CC: adeza, anharris, aschoen, ceph-eng-bugs, edonnell, gabrioux, gmeno, hnallurv, jbrier, kdreyer, nthomas, nwatkins, nyewale, pasik, r.martinez, sankarshan, seb, tserlin
Target Milestone: rc   
Target Release: 3.3   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.16-1.el7cp Ubuntu: ceph-ansible_3.2.16-2redhat1 Container: rhceph:ceph-3.3-rhel-7-containers-candidate-89086-20190718150813
Doc Type: Bug Fix
Doc Text:
.The `shrink-osd.yml` playbook removes partitions from NVMe disks in all situations
Previously, the Ansible playbook `infrastructure-playbooks/shrink-osd.yml` did not properly remove partitions on NVMe devices when used with the `osd_scenario: non-collocated` option in containerized environments. This bug has been fixed with this update, and the playbook removes the partitions as expected.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-08-21 15:10:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1644847    
Bug Blocks: 1572368, 1629656, 1726135    
Attachments: File contains playbook log, inventory file

Description Randy Martinez 2018-04-29 03:27:46 UTC
Description of problem:

infrastructure-playbooks/shrink-osd.yml does not clean up NVMe partitions when osd_scenario is non-collocated.

Version-Release number of selected component (if applicable):
ceph-ansible 3.0 (latest)

How reproducible:
100%

Steps to Reproduce:
1. Run shrink-osd.yml on a non-collocated OSD (NVMe journal); an example invocation is shown below.
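
For reference, a typical invocation looks roughly like the following; the OSD id is an example, not taken from this report:

ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1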

Comment 3 seb 2018-07-18 14:22:54 UTC
That is not part of the scope of shrink-osd.yml.

Comment 4 seb 2018-07-18 14:23:50 UTC
Is it happening on containerized or non-containerized?
Thanks.

Comment 5 Randy Martinez 2018-07-18 16:15:35 UTC
Non-containerized

Comment 6 seb 2018-07-19 11:58:12 UTC
Hmm, I guess this is a bug in ceph-disk then; we use the following line to destroy and zap the OSD:

ceph-disk destroy --cluster {{ cluster }} --destroy-by-id {{ item.0 }} --zap

So I'd assume it's ceph-disk's job to do the cleanup.
I'll see if I can add a task for this, since ceph-disk is on the deprecation path.
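
As an illustration, such a cleanup task could look roughly like the following minimal sketch; journal_device is a placeholder variable, not a name from the actual playbook:

- name: zap the dedicated journal device left behind by ceph-disk   # illustrative sketch only
  command: sgdisk --zap-all {{ journal_device }}   # journal_device is a hypothetical variable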

Comment 7 Ken Dreyer (Red Hat) 2018-07-24 21:59:11 UTC
Josh Durgin pointed out that shrinking clusters is not a common scenario and this should not block the 3.1 release. Re-targeting until we can resolve this.

Comment 11 Vasishta 2018-11-16 10:40:07 UTC
Hi Sebastien, 

Working fine in the non-containerized scenario.
NVMe partitions are still left behind in the containerized scenario.

I think that it might be because of 
https://github.com/ceph/ceph-ansible/blob/stable-3.2/infrastructure-playbooks/shrink-osd.yml#L264
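
To illustrate, the leftover partitions can be checked and manually zapped on the OSD node roughly as follows; /dev/nvme0n1 is an example device, not taken from the attached log:

lsblk /dev/nvme0n1              # leftover journal partitions are still listed
sgdisk --zap-all /dev/nvme0n1   # manual cleanup until the playbook handles it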

Moving to ASSIGNED state.


Regards,
Vasishta Shastry
QE, Ceph

Comment 18 Vasishta 2019-07-10 17:36:34 UTC
The partition on the dedicated device was not removed on a non-NVMe device when shrink-osd-ceph-disk.yml was used.
Moving back to ASSIGNED state.

Regards,
Vasishta Shastry
QE, Ceph

Comment 19 Vasishta 2019-07-10 17:39:42 UTC
Created attachment 1589176 [details]
File contains playbook log, inventory file

Comment 20 Noah Watkins 2019-07-15 20:40:04 UTC
Hey Guillaume

> fix will be present in v3.2.16

Did this make it in?

Comment 27 errata-xmlrpc 2019-08-21 15:10:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538