Bug 1572933
| Summary: | infrastructure-playbooks/shrink-osd.yml leaves behind NVMe partition; scenario non-collocated | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Randy Martinez <r.martinez> |
| Component: | Ceph-Ansible | Assignee: | Guillaume Abrioux <gabrioux> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | medium | Docs Contact: | Erin Donnelly <edonnell> |
| Priority: | medium | | |
| Version: | 3.0 | CC: | adeza, anharris, aschoen, ceph-eng-bugs, edonnell, gabrioux, gmeno, hnallurv, jbrier, kdreyer, nthomas, nwatkins, nyewale, pasik, r.martinez, sankarshan, seb, tserlin |
| Target Milestone: | rc | | |
| Target Release: | 3.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-ansible-3.2.16-1.el7cp; Ubuntu: ceph-ansible_3.2.16-2redhat1; Container: rhceph:ceph-3.3-rhel-7-containers-candidate-89086-20190718150813 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-21 15:10:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1644847 | | |
| Bug Blocks: | 1572368, 1629656, 1726135 | | |
| Attachments: | | | |

Doc Text:

.The `shrink-osd.yml` playbook removes partitions from NVMe disks in all situations
Previously, the Ansible playbook `infrastructure-playbooks/shrink-osd.yml` did not properly remove partitions on NVMe devices when used with the `osd_scenario: non-collocated` option in containerized environments. This bug has been fixed with this update, and the playbook removes the partitions as expected.
Description
Randy Martinez
2018-04-29 03:27:46 UTC
That is not part of the scope of shrink-osd.yml. Is it happening on containerized or non-containerized? Thanks.

Non-containerized.

Hum, I guess this is a bug in ceph-disk then. We use the following line to destroy and zap the OSD:

```
ceph-disk destroy --cluster {{ cluster }} --destroy-by-id {{ item.0 }} --zap
```

So I'd assume it's ceph-disk's job to do the cleanup. I'll see if I can add a task for this since ceph-disk is on the deprecation path.
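As background on why NVMe devices are easy to miss in cleanup tasks like this: NVMe partition device names insert a `p` between the namespace name and the partition number (`/dev/nvme0n1p1`), unlike SCSI-style names (`/dev/sda1`). Any cleanup logic that builds partition paths by appending the number directly would target a non-existent device node on NVMe and silently leave the partition behind. A minimal illustrative sketch of the naming rule (not the actual ceph-ansible code; the helper name is hypothetical):

```python
def partition_path(disk: str, number: int) -> str:
    """Build the partition device path for a whole-disk device.

    Disks whose name ends in a digit (NVMe namespaces like
    /dev/nvme0n1, but also /dev/mmcblk0) need a 'p' separator
    before the partition number; SCSI-style names do not.
    """
    sep = "p" if disk[-1].isdigit() else ""
    return f"{disk}{sep}{number}"

# SCSI-style device: the number is appended directly.
print(partition_path("/dev/sda", 1))      # /dev/sda1
# NVMe namespace: the 'p' separator is required.
print(partition_path("/dev/nvme0n1", 1))  # /dev/nvme0n1p1
```

This is the kind of branch a shrink/zap task has to get right for the partition removal to work on both device types.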
Josh Durgin pointed out that shrinking clusters is not a common scenario and this should not block the 3.1 release. Re-targeting until we can resolve this.

Hi Sebastien,

This is working fine in the non-containerized scenario, but NVMe partitions are still left behind in the containerized scenario. I think it might be because of https://github.com/ceph/ceph-ansible/blob/stable-3.2/infrastructure-playbooks/shrink-osd.yml#L264

Moving to ASSIGNED state.

Regards,
Vasishta Shastry
QE, Ceph

The partition on the dedicated device was also not removed for a non-NVMe device when shrink-osd-ceph-disk.yml was used. Moving back to ASSIGNED state.

Regards,
Vasishta Shastry
QE, Ceph

Created attachment 1589176 [details]
File contains playbook log, inventory file
Hey Guillaume,
> fix will be present in v3.2.16
Did this make it in?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538