
Bug 2016949

Summary: [RADOS]: OSD add command has no return error/alert message to convey OSD not added with wrong hostname
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: skanta
Component: Cephadm
Assignee: Redouane Kachach Elhichou <rkachach>
Status: CLOSED ERRATA
QA Contact: Vinayak Papnoi <vpapnoi>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.0
CC: adking, akraj, akupczyk, bhubbard, ceph-eng-bugs, kdreyer, msaini, nojha, rkachach, rzarzyns, sseshasa, vpapnoi, vumrao
Target Milestone: ---
Target Release: 6.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-17.2.6-5.el9cp
Doc Type: Enhancement
Doc Text:
.`ceph orch daemon add osd` now reports if the hostname specified for deploying the OSD is unknown

Previously, the `ceph orch daemon add osd` command gave no output when the specified hostname was unknown, so users did not notice that the hostname was incorrect and Cephadm silently discarded the command. With this release, the `ceph orch daemon add osd` command reports to the user if the hostname specified for deploying the OSD is unknown.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-06-15 09:15:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2180567    
Bug Blocks: 2192813    

Description skanta 2021-10-25 09:15:52 UTC
Description of problem:

   While adding an OSD, I noticed that there is no error message or alert to convey that the OSD was not added.

I tried to add the OSD using the bare host name, where the node's registered name is actually an FQDN. The command executed without producing any output, so there is no indication of whether the OSD was added or not.

Error Snippet:
--------------
[ceph: root@depressa008 /]# ceph orch daemon add osd depressa012:/dev/nvme0n1
[ceph: root@depressa008 /]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         64.92325  root default                                   
-3         21.64108      host depressa008                           
 1    ssd   0.34109          osd.1             up   1.00000  1.00000
 4    ssd   0.34109          osd.4             up   1.00000  1.00000
 7    ssd   6.98630          osd.7             up   1.00000  1.00000
10    ssd   6.98630          osd.10            up   1.00000  1.00000
13    ssd   6.98630          osd.13            up   1.00000  1.00000
-5         21.64108      host depressa009                           
 0    ssd   0.34109          osd.0             up   1.00000  1.00000
 3    ssd   0.34109          osd.3             up   1.00000  1.00000
 6    ssd   6.98630          osd.6             up   1.00000  1.00000
 9    ssd   6.98630          osd.9             up   1.00000  1.00000
12    ssd   6.98630          osd.12            up   1.00000  1.00000
-7         21.64108      host depressa010                           
 2    ssd   0.34109          osd.2             up   1.00000  1.00000
 5    ssd   0.34109          osd.5             up   1.00000  1.00000
 8    ssd   6.98630          osd.8             up   1.00000  1.00000
11    ssd   6.98630          osd.11            up   1.00000  1.00000
14    ssd   6.98630          osd.14            up   1.00000  1.00000
[ceph: root@depressa008 /]#


After executing the command "ceph orch daemon add osd depressa012:/dev/nvme0n1" there is no error or alert message, and the "ceph osd tree" output above shows that no OSD was added on depressa012.
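
A likely explanation is that Cephadm knows the host only under its FQDN, so the bare name in the command does not match any host in the inventory. One way to check is to compare the name used in the command against the hosts Cephadm actually manages; the listing below is illustrative only (hostnames and addresses are assumed, not taken from this cluster):

[ceph: root@depressa008 /]# ceph orch host ls
HOST                     ADDR       LABELS  STATUS
depressa008.example.com  10.1.1.8
depressa009.example.com  10.1.1.9
depressa010.example.com  10.1.1.10
depressa012.example.com  10.1.1.12

If the inventory lists only depressa012.example.com, then "depressa012" in the add command refers to a host Cephadm does not recognize, and the request is silently dropped.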


Version-Release number of selected component (if applicable):

[ceph: root@depressa008 /]# ceph -v
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)
[ceph: root@depressa008 /]#


How reproducible:

Steps to Reproduce:
1. Configure a cluster
2. Try to add an OSD using the bare host name when the node is registered under its FQDN, or vice versa (see the sketch below)
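
A minimal reproduction sketch, assuming the nodes are registered under their FQDNs (all host names and devices below are placeholders):

# List the hostnames Cephadm has registered:
[ceph: root@node /]# ceph orch host ls

# Deliberately use the bare name instead of the registered FQDN; on the
# affected version the command returns silently with no output:
[ceph: root@node /]# ceph orch daemon add osd somehost:/dev/sdb

# Confirm that no new OSD appeared on that host:
[ceph: root@node /]# ceph osd tree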


Actual results:
   
   No message

Expected results:

A proper error or alert message should be returned when the OSD cannot be added.


Additional info:

Comment 2 Redouane Kachach Elhichou 2022-07-04 12:14:04 UTC
Should be fixed by the PR: https://github.com/ceph/ceph/pull/45217
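
With that change, the orchestrator is expected to reject the command up front when the target host is not in its inventory. A sketch of the improved interaction, where the exact error text and return code are assumptions for illustration rather than quotes from the fix:

[ceph: root@node /]# ceph orch daemon add osd unknownhost:/dev/nvme0n1
Error EINVAL: Cannot find host 'unknownhost' in the inventory

instead of the silent return shown in the original report.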

Comment 3 Adam King 2022-09-27 03:06:50 UTC
*** Bug 2069506 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2023-06-15 09:15:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623