The `shrink-osd.yml` playbook removes partitions from NVMe disks in all situations
Previously, the Ansible playbook `infrastructure-playbooks/shrink-osd.yml` did not properly remove partitions on NVMe devices when used with the `osd_scenario: non-collocated` option in containerized environments. This bug has been fixed with this update, and the playbook removes the partitions as expected.
Description of problem:
infrastructure-playbooks/shrink-osd.yml does not clean up NVMe partitions with the non-collocated scenario.
Version-Release number of selected component (if applicable):
3.0 latest
How reproducible:
100%
Steps to Reproduce:
1. Run shrink-osd.yml on a non-collocated OSD (NVMe journal).
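For reference, a reproduction of this kind would typically be driven as below. This is a hedged sketch: `osd_to_kill` is the extra variable the shrink-osd.yml playbook uses to select OSDs, while the inventory path and OSD id are illustrative assumptions, not values from this report.

```
# Illustrative invocation (inventory path and OSD id are assumptions)
ansible-playbook infrastructure-playbooks/shrink-osd.yml \
    -i hosts \
    -e osd_to_kill=0
# Then check the dedicated NVMe journal device: with the bug present,
# the journal partition for the removed OSD is still there.
```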
Hmm, I guess this is a bug in ceph-disk then; we use the following line to destroy and zap the OSD:
ceph-disk destroy --cluster {{ cluster }} --destroy-by-id {{ item.0 }} --zap
So I'd assume it's ceph-disk's job to do the cleanup.
I'll see if I can add a task for this since ceph-disk is in the deprecation path.
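Such a cleanup task could look roughly like the following. This is a minimal sketch, not the actual patch that landed in ceph-ansible: it assumes `sgdisk` is available on the OSD host, and the variable names (`journal_partition_number`, `dedicated_device`) are illustrative.

```yaml
# Illustrative only — not the actual ceph-ansible fix.
# Deletes the leftover journal partition on the dedicated (NVMe) device
# after ceph-disk destroy has removed the OSD itself.
- name: remove leftover journal partition on the dedicated device
  command: >
    sgdisk --delete={{ journal_partition_number }} {{ dedicated_device }}
  # sgdisk --delete removes a single partition entry from the GPT,
  # leaving the other journal partitions on the shared device intact.
```

Deleting only the one partition (rather than `--zap-all`) matters here, because with non-collocated journals several OSDs typically share the same dedicated device.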
Comment 7 Ken Dreyer (Red Hat) 2018-07-24 21:59:11 UTC
Josh Durgin pointed out that shrinking clusters is not a common scenario and this should not block the 3.1 release. Re-targeting until we can resolve this.
Partition on the dedicated device was not removed on a non-NVMe device when shrink-osd-ceph-disk.yml was used.
Moving back to ASSIGNED state.
Regards,
Vasishta Shastry
QE, Ceph
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538