Bug 1896693
| Summary: | [cephadm] 5.0 - Cephadm restart will remove the unmanaged flag set to OSDs | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Preethi <pnataraj> |
| Component: | Cephadm | Assignee: | Juan Miguel Olmo <jolmomar> |
| Status: | CLOSED ERRATA | QA Contact: | Preethi <pnataraj> |
| Severity: | medium | Docs Contact: | Karen Norteman <knortema> |
| Priority: | medium | CC: | sewagner, tserlin, vereddy |
| Version: | 5.0 | Target Release: | 5.0 |
| Hardware: | x86_64 | OS: | Linux |
| Fixed In Version: | ceph-16.2.0-96.el8cp | Doc Type: | If docs needed, set a value |
| Last Closed: | 2021-08-30 08:27:12 UTC | Type: | Bug |
Description
Preethi
2020-11-11 10:02:38 UTC
Backport to pacific is ongoing: https://github.com/ceph/ceph/pull/40922. It would be great to have this in z1 and pushed downstream.

@Juan, verified the BZ with ceph version 16.2.0-98.el8cp and the issue is not seen. Hence, moving the BZ state to VERIFIED.

```
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph version
ceph version 16.2.0-98.el8cp (9c6352ff5276f8fb2029981206f3516707220054) pacific (stable)

[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch apply osd --all-available-devices --unmanaged=true
Scheduled osd.all-available-devices update...

[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               2/2      3m ago     10d  count:2;label:alertmanager
crash                      3/3      3m ago     10d  *
grafana                    0/1      -          10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e
mds.cephfs                 2/2      3m ago     10d  label:mds
mgr                        2/2      3m ago     10d  label:mgr
mon                        3/3      3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash
node-exporter              3/3      3m ago     10d  *
osd.all-available-devices  12/15    3m ago     7s   <unmanaged>
prometheus                 1/1      3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;count:1
rgw.myrgw                  2/2      3m ago     10d  ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash

[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph mgr module disable cephadm
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph mgr module enable cephadm
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               2/2      3m ago     10d  count:2;label:alertmanager
crash                      3/3      4m ago     10d  *
grafana                    0/1      -          10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e
mds.cephfs                 2/2      4m ago     10d  label:mds
mgr                        2/2      3m ago     10d  label:mgr
mon                        3/3      4m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash
node-exporter              3/3      4m ago     10d  *
osd.all-available-devices  12/15    4m ago     45s  <unmanaged>
prometheus                 1/1      3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;count:1
rgw.myrgw                  2/2      4m ago     10d  ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash
```

The osd.all-available-devices service is still shown as <unmanaged> after disabling and re-enabling the cephadm mgr module, confirming the fix.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294
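For anyone re-running this verification, the unmanaged flag can also be checked in the stored service specification rather than in the PLACEMENT column of `ceph orch ls`. A minimal sketch, assuming a cluster with the osd.all-available-devices service present (the commented YAML lines are illustrative of the export format, not output captured from this cluster):

```sh
# Export the stored OSD service spec; the unmanaged flag is persisted here.
ceph orch ls osd --export
#   service_type: osd
#   service_id: all-available-devices
#   unmanaged: true          <- should survive the restart
#   ...

# Restart the orchestrator as in the verification above.
ceph mgr module disable cephadm
ceph mgr module enable cephadm

# Alternative restart path: fail over the active mgr, which also
# restarts the cephadm module.
# ceph mgr fail

# Export again and confirm that "unmanaged: true" is still present.
ceph orch ls osd --export
```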
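The same unmanaged state can also be declared in a service specification file and applied with `ceph orch apply -i`. A sketch under the assumption that the spec mirrors the one created by `ceph orch apply osd --all-available-devices`; the file name and host pattern are illustrative:

```sh
# Hypothetical spec file mirroring the all-available-devices service,
# with the unmanaged flag set declaratively.
cat > osd-unmanaged.yaml <<'EOF'
service_type: osd
service_id: all-available-devices
unmanaged: true
placement:
  host_pattern: '*'
data_devices:
  all: true
EOF

ceph orch apply -i osd-unmanaged.yaml
```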