Description of problem:
infrastructure-playbooks/shrink-osd.yml does not clean up NVMe partitions with the non-collocated scenario.

Version-Release number of selected component (if applicable):
3.0 latest

How reproducible:
100%

Steps to Reproduce:
1. Run shrink-osd.yml on a non-collocated OSD (NVMe journal).
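For reference, a non-collocated OSD layout with an NVMe journal is typically configured along these lines in group_vars/osds.yml. This is only an illustrative sketch of the affected scenario; the device paths are examples and are not taken from the reproducer inventory.

# Illustrative non-collocated configuration (example devices only)
osd_scenario: non-collocated
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1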
That is not part of the scope of shrink-osd.yml.
Is it happening on containerized or non-containerized? Thanks.
Non-containerized
Hmm, I guess this is a bug in ceph-disk then. We use the following line to destroy and zap the OSD:

ceph-disk destroy --cluster {{ cluster }} --destroy-by-id {{ item.0 }} --zap

So I'd assume it's ceph-disk's job to do the cleanup. I'll see if I can add a task for this, since ceph-disk is on the deprecation path.
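A hypothetical cleanup task could look like the sketch below (this is not the fix that shipped, just an illustration of the idea): once the OSD is destroyed, delete the leftover journal partition on the dedicated device and refresh the kernel partition table. The variable journal_partitions_to_remove is assumed here to hold [device, partition number] pairs and does not exist in shrink-osd.yml.

# Sketch only: remove the stale journal partition left on the dedicated device.
# "journal_partitions_to_remove" is an assumed variable of [device, partnum] pairs.
- name: remove leftover journal partition from the dedicated device
  command: sgdisk --delete={{ item.1 }} {{ item.0 }}
  with_items: "{{ journal_partitions_to_remove }}"

- name: refresh the kernel partition table on the dedicated device
  command: partprobe {{ item.0 }}
  with_items: "{{ journal_partitions_to_remove }}"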
Josh Durgin pointed out that shrinking clusters is not a common scenario and this should not block the 3.1 release. Re-targeting until we can resolve this.
Hi Sebastien,

This is working fine in the non-containerized scenario, but NVMe partitions are still left behind in the containerized scenario. I think it might be because of https://github.com/ceph/ceph-ansible/blob/stable-3.2/infrastructure-playbooks/shrink-osd.yml#L264

Moving to ASSIGNED state.

Regards,
Vasishta Shastry
QE, Ceph
The partition on the dedicated device was not removed for a non-NVMe device when shrink-osd-ceph-disk.yml was used.

Moving back to ASSIGNED state.

Regards,
Vasishta Shastry
QE, Ceph
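For anyone re-verifying this, one way to check for leftover journal partitions on the dedicated device after the playbook has run is with a couple of ad-hoc tasks like the following. This is only a sketch for verification; the device path is an example and these tasks are not part of the playbook.

# Sketch: list what is still present on the dedicated journal device (example path)
- name: check for leftover journal partitions on the dedicated device
  command: lsblk --noheadings --output NAME,TYPE /dev/nvme0n1
  register: journal_parts
  changed_when: false

- name: show remaining partitions
  debug:
    var: journal_parts.stdout_lines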
Created attachment 1589176 [details] File contains playbook log, inventory file
Hey Guillaume,

> fix will be present in v3.2.16

Did this make it in?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:2538