Description of problem:
While adding an OSD, I noticed that no error message or alert is returned to convey that the OSD was not added. I tried to add the OSD using the bare host name where the registered hostname is actually an FQDN. The command executed without producing any output, so there is no indication of whether the OSD was added or not.

Error Snippet:
--------------
[ceph: root@depressa008 /]# ceph orch daemon add osd depressa012:/dev/nvme0n1
[ceph: root@depressa008 /]# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         64.92325  root default
-3         21.64108      host depressa008
 1    ssd   0.34109          osd.1             up   1.00000  1.00000
 4    ssd   0.34109          osd.4             up   1.00000  1.00000
 7    ssd   6.98630          osd.7             up   1.00000  1.00000
10    ssd   6.98630          osd.10            up   1.00000  1.00000
13    ssd   6.98630          osd.13            up   1.00000  1.00000
-5         21.64108      host depressa009
 0    ssd   0.34109          osd.0             up   1.00000  1.00000
 3    ssd   0.34109          osd.3             up   1.00000  1.00000
 6    ssd   6.98630          osd.6             up   1.00000  1.00000
 9    ssd   6.98630          osd.9             up   1.00000  1.00000
12    ssd   6.98630          osd.12            up   1.00000  1.00000
-7         21.64108      host depressa010
 2    ssd   0.34109          osd.2             up   1.00000  1.00000
 5    ssd   0.34109          osd.5             up   1.00000  1.00000
 8    ssd   6.98630          osd.8             up   1.00000  1.00000
11    ssd   6.98630          osd.11            up   1.00000  1.00000
14    ssd   6.98630          osd.14            up   1.00000  1.00000
[ceph: root@depressa008 /]#
--------------

After executing "ceph orch daemon add osd depressa012:/dev/nvme0n1" there is no error or alert message, and no new OSD appears in the tree.

Version-Release number of selected component (if applicable):
[ceph: root@depressa008 /]# ceph -v
ceph version 16.2.0-143.el8cp (0e2c6f9639c37a03e55885fb922dc0cb1b5173cb) pacific (stable)
[ceph: root@depressa008 /]#

How reproducible:

Steps to Reproduce:
1. Configure a cluster
2. Try to add an OSD using the bare name when the node name is an FQDN, and vice versa

Actual results:
No message

Expected results:
Convey the failure with a proper error or alert message.

Additional info:
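Until a fix lands, one way a test or automation script can detect this silent no-op is to compare the OSD set before and after the add. A minimal Python sketch, assuming the ceph CLI is available on the node; the host/device names are just the ones from this report:
--------------
import json
import subprocess
import time

def osd_ids():
    # 'ceph osd tree --format json' lists CRUSH nodes; OSDs have type "osd".
    out = subprocess.run(
        ["ceph", "osd", "tree", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {n["id"] for n in json.loads(out)["nodes"] if n["type"] == "osd"}

before = osd_ids()
subprocess.run(
    ["ceph", "orch", "daemon", "add", "osd", "depressa012:/dev/nvme0n1"],
    check=True,
)

# The orchestrator applies changes asynchronously, so poll for a while
# before concluding that nothing happened.
deadline = time.time() + 120
while time.time() < deadline:
    new = osd_ids() - before
    if new:
        print("new OSD(s):", sorted(new))
        break
    time.sleep(10)
else:
    print("WARNING: no new OSD appeared; the add was likely silently ignored")
--------------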
Should be fixed by the PR: https://github.com/ceph/ceph/pull/45217
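I have not reviewed the PR, but the shape of the fix is presumably a validation step: reject the daemon add when the supplied host does not match any host in the orchestrator inventory, while tolerating bare-name vs. FQDN mismatches. A hypothetical sketch of such a check (the helper name and hosts are illustrative, not the actual code from the PR):
--------------
def match_known_host(requested, known_hosts):
    """Return the inventory entry matching 'requested' (bare name or FQDN),
    or raise so the CLI reports an error instead of silently dropping
    the request."""
    if requested in known_hosts:
        return requested
    # 'depressa012' should match 'depressa012.example.com' and vice versa.
    bare = requested.split(".")[0]
    candidates = [h for h in known_hosts if h.split(".")[0] == bare]
    if len(candidates) == 1:
        return candidates[0]
    raise ValueError(
        f"host '{requested}' is not in the orchestrator inventory; "
        f"known hosts: {sorted(known_hosts)} (see 'ceph orch host ls')"
    )
--------------
For example, match_known_host("depressa012", {"depressa012.example.com"}) would resolve to the FQDN, while an unknown host would now produce an explicit error rather than a silent no-op.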
*** Bug 2069506 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:3623