Bug 1852777

Summary: [cephadm] 5.0 - Ceph Orch command for OSD add is successful but new OSDs are not listed until you clean the data on the disk
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Preethi <pnataraj>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: low
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: sewagner, tserlin, vereddy
Target Milestone: ---
Target Release: 5.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-16.0.0-7209.el8cp
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:25:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Preethi 2020-07-01 09:51:39 UTC
Description of problem: The ceph orch command for OSD add is successful, but the new OSD does not come up until you clean the data on the disk.

We expect a message to be shown if data is already present on the disk given for creating OSDs. Every time, the OSD create command reports success but we do not see the new OSDs in the cluster map. Hence, we assume residual data is the cause, clean the disk manually, and retry the command to get the new OSDs.
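For reference, the manual cleanup mentioned above is along these lines (a sketch only: the zap call is cephadm's standard device zap, and the host/device names are the ones from this reproducer, not a captured transcript):

[ceph: root@magna122 /]# ceph orch device zap magna120 /dev/sdb --force   # wipe the leftover OSD data so the device becomes available again
[ceph: root@magna122 /]# ceph orch daemon add osd magna120:/dev/sdb       # retry the add; only now does the new OSD show up in the cluster map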


Version-Release number of selected component (if applicable):
[root@magna122 ubuntu]# cephadm version
INFO:cephadm:Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-49233-20200624143211
ceph version 15.2.3-1.el8cp (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
[root@magna122 ubuntu]# 


How reproducible:

1. Install a bootstrap cluster with cephadm and the dashboard service enabled.
2. # cephadm shell 
3. ceph -s reports health ok with OSDs up
4. Simulate failed/replaced OSDs as follows.
5. From the CLI, perform the below:
ceph orch osd rm 3 (removed OSD ID 3 from host magna120)
-> OSD removed successfully
6. Remove one more OSD with the --replace option:
ceph orch osd rm 4 --replace (removed OSD ID 4 from host magna120)
Status shows as destroyed in ceph osd tree.
7. Now re-add the OSD that was removed at step 5 with the below command:
ceph orch daemon add osd magna120:/dev/sdb
8. Observe the behaviour (see the check sketched below).
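A minimal way to check the behaviour in step 8 (standard orchestrator commands; the comments describe what we expect to see, not output captured from this run):

[ceph: root@magna122 /]# ceph osd tree                            # the re-added OSD ID does not appear in the tree
[ceph: root@magna122 /]# ceph orch device ls magna120 --refresh   # /dev/sdb is expected to show as unavailable while the old data is present
[ceph: root@magna122 /]# ceph orch osd rm status                  # confirms no removal is still pending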



Actual results: The ceph orch command for OSD add is successful, but the new OSD does not come up until you clean the data on the disk.

We expect a message to be shown if data is already present on the disk given for creating OSDs. Every time, the OSD create reports success but we do not see the new OSDs in the cluster map, so we clean the disk manually and retry the command to get the new OSDs.

The OSD add command executes but the OSDs are not getting created.

Expected results:
OSD creation should work. If data is present, a message should be shown stating that data is present on the disk and that it needs to be cleaned before the disk can be reused.

Additional info:
[ceph: root@magna122 /]# ceph orch daemon add osd magna120:/dev/sdb
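On the OSD host itself, the leftover data from the destroyed OSD can be seen with plain block-device tools (shown only as a sketch; this is not output captured from magna120):

[root@magna120 ~]# lsblk /dev/sdb    # still lists the ceph-* LVM volume created for the old OSD
[root@magna120 ~]# wipefs /dev/sdb   # still reports the LVM2_member signature on the device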

Comment 6 errata-xmlrpc 2021-08-30 08:25:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294