Bug 1852777 - [cephadm] 5.0 - Ceph Orch command for OSD add is successful but not listing new OSDs until you clean the data in the disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-01 09:51 UTC by Preethi
Modified: 2021-08-30 08:26 UTC
CC List: 3 users

Fixed In Version: ceph-16.0.0-7209.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:25:57 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1038 0 None None None 2021-08-27 04:50:44 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:26:09 UTC

Description Preethi 2020-07-01 09:51:39 UTC
Description of problem: The ceph orch command for adding an OSD reports success, but the new OSD is not listed until the data on the disk is cleaned.

We expect a message to be thrown if data is already present on the disk given for creating OSDs. Every time, the OSD create command reports success but we don't see the new OSDs in the cluster map. Hence, we assume the disk has leftover data, clean it manually, and retry the command to get the new OSDs.
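
For reference, the manual cleanup and retry we end up doing looks roughly like this (host and device taken from the steps below; ceph orch device zap is the usual orchestrator way to wipe a device, assuming it is available in this build):

[ceph: root@magna122 /]# ceph orch device zap magna120 /dev/sdb --force
[ceph: root@magna122 /]# ceph orch daemon add osd magna120:/dev/sdb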


Version-Release number of selected component (if applicable):
[root@magna122 ubuntu]# cephadm version
INFO:cephadm:Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-49233-20200624143211
ceph version 15.2.3-1.el8cp (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)
[root@magna122 ubuntu]# 


How reproducible:

1. Install a bootstrap cluster with cephadm and the dashboard service enabled.
2. # cephadm shell 
3. ceph -s reports health ok with OSDs up
4. Perform OSD removal/replacement as follows.
5. From the CLI, run the following:
ceph orch osd rm 3 (removes OSD ID 3 from host magna120)
-> OSD removed successfully
6. Remove one more OSD, with the --replace option:
ceph orch osd rm 4 --replace (removes OSD ID 4 from host magna120)
The status shows as destroyed in ceph osd tree.
7. Now add back the OSD that was removed in step 5:
ceph orch daemon add osd magna120:/dev/sdb
8. Observe the behaviour (see the verification commands sketched after this list).
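
The state after steps 5-7 can be checked with the following (a rough sketch; standard orchestrator/OSD commands as far as I know):

[ceph: root@magna122 /]# ceph orch osd rm status (shows any pending or in-progress OSD removals)
[ceph: root@magna122 /]# ceph osd tree (osd.4 shows as destroyed after the --replace removal)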



Actual results: The ceph orch command for adding an OSD reports success, but the new OSD is not created until the data on the disk is cleaned.

As described above, no message is thrown when data is already present on the given disk, so we have to assume that is the cause, clean the disk manually, and retry the command to get the new OSDs.

The OSD add command executes, but the OSDs are not getting created.

Expected results:
OSD create should work. If data is present on the disk, a message should be thrown stating that data is present and the disk needs to be cleaned before it can be reused for a new OSD.
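
As far as I know, the orchestrator already tracks this per device, so the information needed for such a message should be available (a sketch, assuming the standard device listing):

[ceph: root@magna122 /]# ceph orch device ls (reports each device with an availability flag and reject reasons when it cannot be used)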

Additional info:
[ceph: root@magna122 /]# ceph orch daemon add osd magna120:/dev/sdb
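
Whether the OSD actually came up can then be checked with (sketch):

[ceph: root@magna122 /]# ceph osd tree (the new OSD should appear here once it is really created)
[ceph: root@magna122 /]# ceph orch ps (the corresponding osd daemon should be listed on magna120)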

Comment 6 errata-xmlrpc 2021-08-30 08:25:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

