Bug 2253832 - [cephadm] cephadm reports an OSD service named "osd" that has no spec file and was never actually created as a service
Summary: [cephadm] cephadm reports an OSD service named "osd" that has no spec file and was never actually created as a service
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.1
Assignee: Kushal Deb
QA Contact: Aditya Ramteke
URL:
Whiteboard:
Duplicates: 2279839 (view as bug list)
Depends On:
Blocks:
 
Reported: 2023-12-10 05:49 UTC by Sunil Angadi
Modified: 2025-06-26 12:10 UTC
CC List: 5 users

Fixed In Version: ceph-19.2.1-86.el9cp
Doc Type: Bug Fix
Doc Text:
Cause: Cephadm uses the service name "osd" as a dumping ground for all OSDs that are not attached to an OSD spec, so it is not really a service.
Consequence: No service-related commands (rm, start, stop, restart, etc.) could be run against it.
Fix:
- All OSDs created via ceph orch daemon add osd are now associated with a real, valid OSD spec.
- This spec is now properly saved and managed within Cephadm, making all standard ceph orch operations possible on these OSDs.
Result:
# ceph orch daemon add osd ceph-ceph-volume-reg-0c9m6b-node1-installer:/dev/vdb
Created osd(s) 0 on host 'ceph-ceph-volume-reg-0c9m6b-node1-installer'
# ceph orch ls osd
NAME         PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd.default             1   17s ago    53s  ceph-ceph-volume-reg-0c9m6b-node1-installer
# ceph osd stat
1 osds: 1 up (since 68s), 1 in (since 95s); epoch: e9
# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
# ceph orch ls osd
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd.all-available-devices             17   50s ago    3m   *
osd.default                            1   49s ago    12m  ceph-ceph-volume-reg-0c9m6b-node1-installer
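
As a rough sketch (not taken verbatim from the fix), the spec that Cephadm now persists for a daemon-added OSD would look something like the following. The service_id "default" matches the osd.default service shown in the Result output above; the placement and spec fields are assumptions modelled on the specs exported later in this bug:

service_type: osd
service_id: default
service_name: osd.default
placement:
  hosts:
  - ceph-ceph-volume-reg-0c9m6b-node1-installer
spec:
  filter_logic: AND        # assumed default, mirroring the exported specs below
  objectstore: bluestore   # assumed default, mirroring the exported specs below

With such a spec in place, a command like ceph orch restart osd.default should be accepted, because the name resolves to a saved spec.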
Clone Of:
Environment:
Last Closed: 2025-06-26 12:10:32 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-8019 (last updated 2023-12-10 05:50:09 UTC)
Red Hat Product Errata RHSA-2025:9775 (last updated 2025-06-26 12:10:34 UTC)

Description Sunil Angadi 2023-12-10 05:49:17 UTC
Description of problem:
cephadm uses the service name "osd" as a dumping ground for all OSDs that aren't attached to an OSD spec. It isn't really a service and has no spec, yet it is listed in the output of ceph orch ls osd:

[root@ceph-spr-cpm3gh-node4 ~]# ceph orch  ls osd
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT    
osd                                     3  10m ago    -    <unmanaged>  
osd.all-available-devices               6  10m ago    2h   <unmanaged>  

but the restart command fails for it, as shown below:

[root@ceph-spr-cpm3gh-node4 ~]# ceph orch restart osd
Error EINVAL: Invalid service name "osd". View currently running services using "ceph orch ls"

This looks like a bug: cephadm itself reports the OSD service name as "osd", yet the restart command rejects that very name.

Hence, the request is that ceph orch ls osd list only OSD services that were actually created and are active and running.
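
For context, service commands such as restart appear to be accepted only for names that are backed by a saved spec, i.e. names of the form osd.<service_id>. A minimal sketch based on the spec already exported from this cluster (see Additional info below); the comment line is added here for illustration:

service_type: osd
service_id: all-available-devices
# cephadm derives the service name from the two fields above:
service_name: osd.all-available-devices

So ceph orch restart osd.all-available-devices should be accepted, while the bare name "osd" is rejected.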


Version-Release number of selected component (if applicable):
ceph version 17.2.6-167.el9cp (5ef1496ea3e9daaa9788809a172bd5a1c3192cf7) quincy (stable)

How reproducible:
tried twice

Steps to Reproduce:
1. Deploy Ceph 6.1 using cephadm.
2. Apply the spec file provided in the Additional info section to deploy the cluster.
3. The service with the bare name "osd" fails to deploy as an actual service.
4. It nevertheless gets listed in the output of ceph orch ls osd.

Actual results:
[root@ceph-spr-cpm3gh-node4 ~]# ceph orch ls osd
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT    
osd                                     3  4m ago     -    <unmanaged>  
osd.all-available-devices               6  4m ago     3h   <unmanaged>  

[root@ceph-spr-cpm3gh-node4 ~]# ceph orch restart osd
Error EINVAL: Invalid service name "osd". View currently running services using "ceph orch ls"


Please check the log:

2023-12-08T09:43:05.189114+0000 mgr.ceph-spr-cpm3gh-node1-installer.wfhbhf (mgr.14219) 5974 : cephadm [ERR] Invalid service name "osd". View currently running services using "ceph orch ls"
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 125, in wrapper
    return OrchResult(f(*args, **kwargs))
  File "/usr/share/ceph/mgr/cephadm/module.py", line 2033, in service_action
    raise OrchestratorError(f'Invalid service name "{service_name}".'
orchestrator._interface.OrchestratorError: Invalid service name "osd". View currently running services using "ceph orch ls"
2023-12-08T09:53:37.348836+0000 mgr.ceph-spr-cpm3gh-node1-installer.wfhbhf (mgr.14219) 6305 : cephadm [ERR] Invalid service name "osd". View currently running services using "ceph orch ls"

[root@ceph-spr-cpm3gh-node4 ~]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT    
alertmanager               ?:9093,9094      1/1  10m ago    23h  count:1      
ceph-exporter                               3/3  10m ago    23h  *            
crash                                       3/3  10m ago    23h  *            
grafana                    ?:3000           1/1  10m ago    23h  count:1      
mgr                                         2/2  10m ago    23h  label:mgr    
mon                                         3/3  10m ago    23h  label:mon    
node-exporter              ?:9100           3/3  10m ago    23h  *            
osd                                           4  9m ago     -    <unmanaged>  
osd.all-available-devices                     5  10m ago    22h  <unmanaged>  
prometheus                 ?:9095           1/1  10m ago    23h  count:1      


Expected results:
List only services that were created successfully and are active and running.
If cephadm does not want users to specify the OSD service_name as the bare "osd" (as done in the provided spec file), it should return an error to the user at the time the spec is applied.
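
For comparison, a hedged example of how the OSD spec could be written with an explicit service_id (the id "default" and the data_devices filter here are illustrative assumptions, not taken from the original spec file), so that the resulting service gets a name such as osd.default that the ceph orch service commands accept:

service_type: osd
service_id: default
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true             # illustrative assumption
  filter_logic: AND
  objectstore: bluestore

Applying this with ceph orch apply -i <file> would avoid the bare "osd" service name altogether.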

Additional info:

[root@ceph-spr-cpm3gh-node4 ~]# ceph orch ls --export
service_type: alertmanager
service_name: alertmanager
placement:
  count: 1
---
service_type: ceph-exporter
service_name: ceph-exporter
placement:
  host_pattern: '*'
spec:
  prio_limit: 5
  stats_period: 5
---
service_type: crash
service_name: crash
placement:
  host_pattern: '*'
---
service_type: grafana
service_name: grafana
placement:
  count: 1
---
service_type: mgr
service_name: mgr
placement:
  label: mgr
---
service_type: mon
service_name: mon
placement:
  label: mon
---
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
---
service_type: osd
service_name: osd
unmanaged: true
spec:
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
unmanaged: true
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore
---
service_type: prometheus
service_name: prometheus
placement:
  count: 1

Comment 1 Pawan 2024-05-09 06:05:12 UTC
*** Bug 2279839 has been marked as a duplicate of this bug. ***

Comment 17 errata-xmlrpc 2025-06-26 12:10:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775

