Description of problem:
The `ceph orch daemon add osd` command reports success but does not actually create the OSD until the data on the given disk is cleaned. If data is already present on the disk, an explicit message should be raised instead of a silent success. Every time the OSD create command succeeds, no new OSDs appear in the cluster map; we have to assume the disk is dirty, clean it manually, and retry the command to get the new OSDs.

Version-Release number of selected component (if applicable):
[root@magna122 ubuntu]# cephadm version
INFO:cephadm:Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-49233-20200624143211
ceph version 15.2.3-1.el8cp (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
[root@magna122 ubuntu]#

How reproducible:
1. Install a bootstrap cluster with cephadm and the dashboard service enabled.
2. # cephadm shell
3. `ceph -s` reports health OK with OSDs up.
4. Perform failed/replaced OSDs as follows.
5. From the CLI, remove an OSD:
   ceph orch osd rm 3   (removed OSD ID 3 from host magna120) -> OSD removed successfully.
6. Remove one more OSD with the replace option:
   ceph orch osd rm 4 --replace   (removed OSD ID 4 from host magna120). Status shows as destroyed in `ceph osd tree`.
7. Now add a new OSD on the device freed at step 5:
   ceph orch daemon add osd magna120:/dev/sdb
8. Observe the behaviour.

Actual results:
The OSD add command executes successfully, but the OSD is not created and does not appear in the cluster map until the disk is cleaned manually. No warning is shown about the existing data on the disk.

Expected results:
OSD creation should work. If data is present on the disk, a message should be raised stating that the disk holds data and needs to be cleaned before it can be reused for a new OSD.
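The expected behaviour (warn when the disk still carries data) can be approximated client-side by inspecting device availability before running the add command, in the spirit of `ceph-volume inventory --format json`, which reports an `available` flag and `rejected_reasons` per device. A minimal sketch; the JSON shape is modelled on that output, and the sample data below is hypothetical:

```python
import json

def check_device(inventory_json: str, path: str) -> str:
    """Return a human-readable verdict for one device in an
    inventory-style JSON dump (a list of device dicts)."""
    for dev in json.loads(inventory_json):
        if dev.get("path") != path:
            continue
        if dev.get("available"):
            return f"{path}: available, safe to use for a new OSD"
        reasons = ", ".join(dev.get("rejected_reasons", [])) or "unknown"
        return f"{path}: NOT available ({reasons}) - clean/zap it first"
    return f"{path}: not found in inventory"

# Hypothetical sample resembling inventory output for a disk that
# still holds an old filesystem/LVM signature from a removed OSD.
sample = json.dumps([
    {"path": "/dev/sdb", "available": False,
     "rejected_reasons": ["Has a FileSystem", "LVM detected"]},
    {"path": "/dev/sdc", "available": True, "rejected_reasons": []},
])

print(check_device(sample, "/dev/sdb"))
print(check_device(sample, "/dev/sdc"))
```

Emitting a verdict like this at `ceph orch daemon add osd` time, instead of a silent success, is essentially what this report asks for.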
Additional info:
[ceph: root@magna122 /]# ceph orch daemon add osd magna120:/dev/sdb
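As a workaround consistent with the behaviour described above, the orchestrator's `ceph orch device zap` command can be used to wipe the leftover data so the disk is seen as clean before retrying the add. A sketch using the host and device from this report (destructive; run only on a disk you intend to reuse):

```
# Wipe leftover LVM/OSD data so the orchestrator treats the disk as clean.
ceph orch device zap magna120 /dev/sdb --force

# Re-issue the add; the new OSD should now appear in `ceph osd tree`.
ceph orch daemon add osd magna120:/dev/sdb
```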
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294