* Description of problem:
Running 'ceph osd in any' on a Ceph cluster marks every historic OSD 'in' that was once part of the cluster, including OSDs that were removed permanently.

* Version-Release number of selected component (if applicable):
RHCS 3.2z1

* How reproducible:
Always

* Steps to Reproduce:
1. Run the following commands to remove an OSD permanently from the Ceph cluster:
   - systemctl stop ceph-osd@1
   - ceph osd out 1
   - ceph osd crush remove osd.1
   - ceph auth del osd.1
   - ceph osd rm 1
   - umount /var/lib/ceph/osd/ceph-1
   - ceph-disk zap /dev/sdb (this also removes the Ceph partition from the device)
2. After some time, run:
   # ceph osd in any
3. osd.1 is marked 'down+in'.

* Actual results:
OSDs are marked 'in' even though they were removed permanently.

* Expected results:
A permanently removed OSD should not be marked 'in' or started.

* Additional info:
NA
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538