Description of problem: Add a new call that zaps based on an OSD ID, for the case where we want to zap an OSD rather than a specific block device. The idea would be to run "ceph-volume lvm zap 0", where 0 is the OSD ID. ceph-volume would then:

* scan the OSD
* find all the LVs associated with that OSD
* remove all of those LVs

Bonus: if no other LVs remain on the VG/PV the OSD was removed from, we also purge the VG and PV. If other LVs remain, we leave the VG/PV alone.
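For illustration, the invocation could look something like the sketch below. The exact interface (a positional OSD ID vs. an --osd-id flag) is an assumption at this point; --destroy is the existing zap flag for wiping LVs and partitions:

    # zap every LV that belongs to OSD 0 on this host,
    # and purge the VG/PV if nothing else remains on them
    ceph-volume lvm zap --destroy --osd-id 0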
After discussing this further with Alfredo, we agreed to drop support for --osd-id and keep only --osd-fsid. Using --osd-fsid is much safer in the context of containers: if someone deploys multiple clusters on the same machine, we could accidentally remove all the OSDs that share the same ID. If ceph-volume detects multiple OSDs with the same ID, it would remove all of them, when we only want to remove a single one (the one from the cluster we targeted). Using --osd-fsid allows us to precisely match the right OSD in all cases.
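A sketch of the safer, fsid-based workflow (assuming a --osd-fsid flag on zap; ceph-volume lvm list is the existing call that reports the osd fsid of each OSD on the host):

    # look up the fsid of the OSD we want to remove
    ceph-volume lvm list
    # zap only the OSD whose fsid matches, regardless of duplicate IDs
    # across clusters (<osd-fsid> is a placeholder)
    ceph-volume lvm zap --destroy --osd-fsid <osd-fsid>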
@Sebastien, could you please confirm that we don't need this BZ to track the feature request to drop --osd-id in favor of --osd-fsid, since it isn't possible to run multiple clusters on the same machine? (http://lists.ceph.com/pipermail/ceph-ansible-ceph.com/2019-January/000249.html)
Alfredo, correct, we don't need this BZ to track the feature request to drop --osd-id in favor of --osd-fsid. Even though ceph-ansible cannot run multiple clusters on the same machine, Rook can, so there is still a desire to drop --osd-id in favor of --osd-fsid. Thanks.
Thanks a lot, Alfredo. Based on the inputs in Comment 14 and Comment 15, I'm moving this BZ to VERIFIED state. I've opened new BZ 1738379 to address the issue mentioned in Comment 13. Taking off the needinfo flags; feel free to update if there are any concerns.

Regards,
Vasishta Shastry
QE, Ceph
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2019:2538