Bug 2016949 - [RADOS]: OSD add command has no return error/alert message to convey OSD not added with wrong hostname
Summary: [RADOS]: OSD add command has no return error/alert message to convey OSD not added with wrong hostname
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1
Assignee: Redouane Kachach Elhichou
QA Contact: Vinayak Papnoi
Docs Contact: Akash Raj
URL:
Whiteboard:
Duplicates: 2069506 (view as bug list)
Depends On: 2180567
Blocks: 2192813
 
Reported: 2021-10-25 09:15 UTC by skanta
Modified: 2023-06-15 09:16 UTC
CC List: 13 users

Fixed In Version: ceph-17.2.6-5.el9cp
Doc Type: Enhancement
Doc Text:
.`ceph orch daemon add osd` now reports if the hostname specified for deploying the OSD is unknown
Previously, if the hostname was incorrect, Cephadm silently discarded the command, and because `ceph orch daemon add osd` gave no output, users would not notice. With this release, the `ceph orch daemon add osd` command reports to the user if the hostname specified for deploying the OSD is unknown.
Clone Of:
Environment:
Last Closed: 2023-06-15 09:15:29 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-2093 0 None None None 2021-10-25 09:20:12 UTC
Red Hat Product Errata RHSA-2023:3623 0 None None None 2023-06-15 09:16:01 UTC

Description skanta 2021-10-25 09:15:52 UTC
Description of problem:

   While adding the OSD, I noticed that there is no return error message or alert to convey that the OSD was not added.

I tried to add the OSD using the bare host name when the registered hostname is actually an FQDN. The command executed without producing any output, so there is no indication of whether the OSD was added or not.

Error Snippet:
--------------
[ceph: root@depressa008 /]# ceph orch daemon add osd depressa012:/dev/nvme0n1
[ceph: root@depressa008 /]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         64.92325  root default                                   
-3         21.64108      host depressa008                           
 1    ssd   0.34109          osd.1             up   1.00000  1.00000
 4    ssd   0.34109          osd.4             up   1.00000  1.00000
 7    ssd   6.98630          osd.7             up   1.00000  1.00000
10    ssd   6.98630          osd.10            up   1.00000  1.00000
13    ssd   6.98630          osd.13            up   1.00000  1.00000
-5         21.64108      host depressa009                           
 0    ssd   0.34109          osd.0             up   1.00000  1.00000
 3    ssd   0.34109          osd.3             up   1.00000  1.00000
 6    ssd   6.98630          osd.6             up   1.00000  1.00000
 9    ssd   6.98630          osd.9             up   1.00000  1.00000
12    ssd   6.98630          osd.12            up   1.00000  1.00000
-7         21.64108      host depressa010                           
 2    ssd   0.34109          osd.2             up   1.00000  1.00000
 5    ssd   0.34109          osd.5             up   1.00000  1.00000
 8    ssd   6.98630          osd.8             up   1.00000  1.00000
11    ssd   6.98630          osd.11            up   1.00000  1.00000
14    ssd   6.98630          osd.14            up   1.00000  1.00000
[ceph: root@depressa008 /]#


After executing the command "ceph orch daemon add osd depressa012:/dev/nvme0n1", no error or alert message was returned, and no OSD for depressa012 appears in the `ceph osd tree` output above.
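As a workaround on builds without the fix, the host's registered name can be checked against the orchestrator's inventory and the add command re-run with that exact name. A minimal sketch, assuming the node was registered under an FQDN; "depressa012.example.com" is a hypothetical name used only for illustration:

[ceph: root@depressa008 /]# ceph orch host ls
# Note the exact HOST value reported for the target node (bare name vs. FQDN).

[ceph: root@depressa008 /]# ceph orch device ls depressa012.example.com
# Optional: confirm cephadm can see /dev/nvme0n1 on that host.

[ceph: root@depressa008 /]# ceph orch daemon add osd depressa012.example.com:/dev/nvme0n1
# Re-run the add command with the hostname exactly as it appears in "ceph orch host ls".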


Version-Release number of selected component (if applicable):

[ceph: root@depressa008 /]# ceph -v
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)
[ceph: root@depressa008 /]#


How reproducible:

Steps to Reproduce:
1. Configure a cluster
2. Try to add an OSD using the bare host name when the node is registered with its FQDN, or vice versa (see the sketch below)
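A minimal reproduction sketch covering both directions; the hostnames ("node1", "node1.example.com"), the prompt, and the device path are hypothetical and used only for illustration:

# Case A: host registered with its FQDN, OSD added with the bare name.
[ceph: root@node0 /]# ceph orch host ls        # inventory lists node1.example.com
[ceph: root@node0 /]# ceph orch daemon add osd node1:/dev/nvme0n1

# Case B: host registered with the bare name, OSD added with the FQDN.
[ceph: root@node0 /]# ceph orch host ls        # inventory lists node1
[ceph: root@node0 /]# ceph orch daemon add osd node1.example.com:/dev/nvme0n1

# On the affected build both commands return silently and "ceph osd tree"
# shows no new OSD on the host.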


Actual results:
   
   No error or alert message is returned, and the OSD is not added.

Expected results:

The command should return a proper error or alert message conveying that the OSD was not added because the specified hostname is unknown.


Additional info:

Comment 2 Redouane Kachach Elhichou 2022-07-04 12:14:04 UTC
Should be fixed by the PR: https://github.com/ceph/ceph/pull/45217
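For verification on a build that contains the fix (ceph-17.2.6-5.el9cp or later, per this bug), a minimal check sketch; "unknown-host", the prompt, and the device path are hypothetical, and the exact wording of the error is not shown in this report:

[ceph: root@node0 /]# ceph orch host ls
# Pick any name that does not appear in the inventory, e.g. "unknown-host".

[ceph: root@node0 /]# ceph orch daemon add osd unknown-host:/dev/nvme0n1
# Expected (per the Doc Text above): the command reports that the specified
# hostname is unknown instead of returning silently.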

Comment 3 Adam King 2022-09-27 03:06:50 UTC
*** Bug 2069506 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2023-06-15 09:15:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623

