Bug 2253832
Summary: | [cephadm] cephadm uses "osd" as the OSD service name, which has no spec file, and the service does not get created | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Sunil Angadi <sangadi> |
Component: | Cephadm | Assignee: | Kushal Deb <kdeb> |
Status: | CLOSED ERRATA | QA Contact: | Aditya Ramteke <aramteke> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | |
Version: | 6.1 | CC: | cephqe-warriors, kdeb, pdhiran, saraut, tserlin |
Target Milestone: | --- | |
Target Release: | 8.1 | |
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | | |
Fixed In Version: | ceph-19.2.1-86.el9cp | Doc Type: | Bug Fix |
Doc Text: |
Cause: Cephadm grouped every OSD that was not attached to an OSD spec under a placeholder service named "osd". This catch-all bucket is not a real service.
Consequence: Service-level commands (rm, start, stop, restart, and so on) could not be run against these OSDs.
Fix:
- All OSDs created via ceph orch daemon add osd are now associated with a real, valid OSD spec.
- This spec is now properly saved and managed within Cephadm, making all standard ceph orch operations possible on these OSDs (a sketch of the exported spec is shown below).
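A minimal sketch of what the persisted spec might look like when exported with ceph orch ls osd --export, assuming the osd.default service and the /dev/vdb device used in the example below; the exact fields depend on the Ceph release and the devices involved:
# ceph orch ls osd --export
service_type: osd
service_id: default
placement:
  hosts:
  - ceph-ceph-volume-reg-0c9m6b-node1-installer
spec:
  data_devices:
    paths:
    - /dev/vdb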
Result:
# ceph orch daemon add osd ceph-ceph-volume-reg-0c9m6b-node1-installer:/dev/vdb
Created osd(s) 0 on host 'ceph-ceph-volume-reg-0c9m6b-node1-installer'
# ceph orch ls osd
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
osd.default 1 17s ago 53s ceph-ceph-volume-reg-0c9m6b-node1-installer
# ceph osd stat
1 osds: 1 up (since 68s), 1 in (since 95s); epoch: e9
# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
# ceph orch ls osd
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
osd.all-available-devices 17 50s ago 3m *
osd.default 1 49s ago 12m ceph-ceph-volume-reg-0c9m6b-node1-installer
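With the spec persisted, the service-level commands listed under Consequence become available. For example (output omitted; commands shown against the osd.default service created above):
# ceph orch restart osd.default
# ceph orch stop osd.default
# ceph orch start osd.default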
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2025-06-26 12:10:32 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Sunil Angadi
2023-12-10 05:49:17 UTC
*** Bug 2279839 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775