Bug 1898196
Summary: [cephadm] 5.0 - osd.None, an unknown service, is created and displayed in the ceph orch ls command
| Field | Value |
|---|---|
| Product | [Red Hat Storage] Red Hat Ceph Storage |
| Component | Cephadm |
| Version | 5.0 |
| Target Release | 5.0 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | medium |
| Reporter | Preethi <pnataraj> |
| Assignee | Juan Miguel Olmo <jolmomar> |
| QA Contact | Vasishta <vashastr> |
| Docs Contact | Karen Norteman <knortema> |
| CC | kdreyer, tserlin, vereddy |
| Fixed In Version | ceph-16.2.0-28.el8cp |
| Doc Type | No Doc Update |
| Type | Bug |
| Last Closed | 2021-08-30 08:27:12 UTC |
Description (Preethi, 2020-11-16 16:05:56 UTC)
Waiting for the backport to Pacific.

@Juan, we still see the issue in the latest alpha:

```
[ceph: root@ceph-sunil1adm-1614692246522-node1-mon-mgr-installer-node-expor /]# ceph orch ls
NAME                                  RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME  IMAGE ID
alertmanager                          2/2   6m ago  5d  ceph-sunil1adm-1614692246522-node2-mon-mds-node-exporter-alertm;ceph-sunil1adm-1614692246522-node1-mon-mgr-installer-node-expor  registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5  3ea01d72d22c
crash                                 0/8   -       -   <unmanaged>  <unknown>  <unknown>
grafana                               1/1   6m ago  5d  ceph-sunil1adm-1614692246522-node1-mon-mgr-installer-node-expor  registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest  bd3d7748747b
iscsi.iscsi                           1/1   6m ago  5d  ceph-sunil1adm-1614692246522-node8-client-nfs-node-exporter-isc  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:687d060d91102d9317fbd8ec0305a24f53bdfb86d7ca3aaacc664955da01f03f  700feae6f592
mds.cephfs                            2/2   6m ago  5d  ceph-sunil1adm-1614692246522-node2-mon-mds-node-exporter-alertm;ceph-sunil1adm-1614692246522-node8-client-nfs-node-exporter-isc;count:2  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:687d060d91102d9317fbd8ec0305a24f53bdfb86d7ca3aaacc664955da01f03f  700feae6f592
mgr                                   1/1   6m ago  5d  ceph-sunil1adm-1614692246522-node1-mon-mgr-installer-node-expor  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-21981-20210302003306  700feae6f592
mon                                   3/3   6m ago  5d  label:mon  mix  700feae6f592
node-exporter                         8/8   6m ago  5d  *  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  a6af8f87dd4a
osd.None                              1/0   6m ago  -   <unmanaged>  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:687d060d91102d9317fbd8ec0305a24f53bdfb86d7ca3aaacc664955da01f03f  700feae6f592
osd.all-available-devices             7/9   6m ago  3d  <unmanaged>  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:687d060d91102d9317fbd8ec0305a24f53bdfb86d7ca3aaacc664955da01f03f  700feae6f592
osd.dashboard-admin123-1614929231097  2/10  6m ago  3d  *  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:687d060d91102d9317fbd8ec0305a24f53bdfb86d7ca3aaacc664955da01f03f  mix
prometheus                            1/1   6m ago  5d  ceph-sunil1adm-1614692246522-node1-mon-mgr-installer-node-expor  registry.redhat.io/openshift4/ose-prometheus:v4.6  6050e785b668
```

I can see the issue on the clusters below:

- magna021 (root/q)
- 10.0.210.149 (cephuser/cephuser)

Probably, we can see the issue after setting the unmanaged flag to true after deploying the all-available-devices OSD service.

"osd.None" services appear after creating OSDs with "ceph orch daemon add osd host:device". "osd.None" is a simulated service, created to show "something" for OSD daemons created with "daemon add". This is why it cannot be deleted: these kinds of services do not really exist.

As a workaround, until the modification to avoid "simulated" services is merged, removing the "osd.None" service requires removing all OSD daemons associated with the simulated service:

1. Use "ceph orch ls osd --format yaml" to get the list of devices used by the "osd.None" service.
2. Use "ceph device ls" to find the OSD ids of these devices.
3. Use "ceph orch osd rm <id>" to remove the associated daemons.

Once the associated OSDs are removed, the "osd.None" service will disappear (see the consolidated command sketch below).

Backport to Pacific is ongoing: https://github.com/ceph/ceph/pull/40746
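For reference, a consolidated sketch of the reproduction and the workaround described above. This is illustrative only: the host `host1`, the device `/dev/sdb`, and the OSD id `3` are hypothetical placeholders; substitute the values that `ceph orch ls osd --format yaml` and `ceph device ls` report on your cluster.

```sh
# The reporter's hypothesis above involves flipping the OSD service to
# unmanaged after deployment, e.g.:
ceph orch apply osd --all-available-devices --unmanaged=true

# Reproduction: creating an OSD directly with "daemon add" (instead of a
# service spec) is what leaves the simulated "osd.None" service behind.
# "host1:/dev/sdb" is a hypothetical host:device pair.
ceph orch daemon add osd host1:/dev/sdb

# The simulated service now appears and cannot be removed directly:
ceph orch ls osd

# Workaround step 1: list OSD services as YAML to see which devices back
# the "osd.None" service.
ceph orch ls osd --format yaml

# Workaround step 2: map those devices to their OSD ids.
ceph device ls

# Workaround step 3: remove each associated OSD daemon ("3" is a
# hypothetical id). Once all of them are gone, "osd.None" disappears
# from "ceph orch ls".
ceph orch osd rm 3
```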
@Juan, verified with ceph version 16.2.0-34.el8cp; the issue is not seen. Hence, moving this to the verified state.

```
[ceph: root@ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               1/1      4m ago     6d   ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer
crash                      7/7      8m ago     7d   *
grafana                    1/1      4m ago     6d   ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer
mds.fs_name                2/2      4m ago     22h  ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer;ceph-5x-doc-mgowri-1620728264715-node7-rgw-mgr
mgr                        3/3      6m ago     23h  ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer;ceph-5x-doc-mgowri-1620728264715-node2-mon-mgr-rgw;ceph-5x-doc-mgowri-1620728264715-node7-rgw-mgr
mon                        3/3      6m ago     22h  ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer;ceph-5x-doc-mgowri-1620728264715-node2-mon-mgr-rgw;ceph-5x-doc-mgowri-1620728264715-node6-mon-rgw;count:3
nfs.foo                    2/2      6m ago     4h   ceph-5x-doc-mgowri-1620728264715-node2-mon-mgr-rgw;ceph-5x-doc-mgowri-1620728264715-node7-rgw-mgr;count:2
node-exporter              7/7      8m ago     6d   *
osd.all-available-devices  12/19    8m ago     23h  <unmanaged>
prometheus                 1/1      4m ago     6d   ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer
rgw.foo                    6/6      6m ago     23h  count-per-host:2;label:rgw
rgw.test                   2/2      6m ago     23h  ceph-5x-doc-mgowri-1620728264715-node2-mon-mgr-rgw;ceph-5x-doc-mgowri-1620728264715-node6-mon-rgw;count:2
[ceph: root@ceph-5x-doc-mgowri-1620728264715-node1-mon-mgr-installer /]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294