Bug 1896693 - [cephadm] 5.0 - Cephadm restart will remove the unmanaged flag set to OSDs
Summary: [cephadm] 5.0 - Cephadm restart will remove the unmanaged flag set to OSDs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Preethi
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-11-11 10:02 UTC by Preethi
Modified: 2021-08-30 08:27 UTC
CC: 3 users

Fixed In Version: ceph-16.2.0-96.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:27:12 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 49805 (last updated 2021-03-15 13:09:49 UTC)
Red Hat Issue Tracker RHCEPH-367 (last updated 2021-08-19 05:07:53 UTC)
Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:27:26 UTC)

Description Preethi 2020-11-11 10:02:38 UTC
Description of problem: Restarting cephadm removes the unmanaged flag set on the OSD service.
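For context, the unmanaged flag lives in the OSD service spec that cephadm stores for the service. A rough way to confirm the flag is set is to export the spec (a sketch only; the exact YAML fields vary by build):

[root@magna094 ubuntu]# ceph orch ls osd --export
service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
unmanaged: true          # the flag this bug reports as being cleared after a cephadm restart
spec:
  data_devices:
    all: true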


Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445
ceph version 16.0.0-6275.el8cp (d1e0606106224ac333f1c245150d7484cb626841) pacific (dev)

[root@magna094 ubuntu]# rpm -qa |grep cephadm
cephadm-16.0.0-6817.el8cp.x86_64

How reproducible:


Steps to Reproduce:
1. Have a 5.0 cluster installed with cephadm with dashboard enabled
2. Deploy the osd.all-available-devices service with the unmanaged flag set to true
3. Check ceph orch ls | grep osd
4. Restart cephadm (disable and re-enable the cephadm mgr module) and observe the change in the ceph orch ls output (see the condensed sketch below)
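Condensed command sketch of the steps above ("restart cephadm" here means toggling the cephadm mgr module, as in the command output further down; hostnames omitted):

ceph orch apply osd --all-available-devices --unmanaged=true
ceph orch ls | grep osd        # placement column shows <unmanaged>
ceph mgr module disable cephadm
ceph mgr module enable cephadm
ceph orch ls | grep osd        # placement column now shows *; the unmanaged flag is gone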

Actual results: Restarting cephadm changes the OSD service settings (the unmanaged flag is cleared).

Expected results: Restarting cephadm should not change the OSD service settings.


Additional info: magna094 root/q - bootstrap node


Command output:
[root@magna094 ubuntu]# ceph orch apply osd --all-available-devices --unmanaged=true
Scheduled osd.all-available-devices update...
[root@magna094 ubuntu]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID      
alertmanager                   1/1  5m ago     4w   count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f  
crash                          9/9  5m ago     4w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
grafana                        1/1  5m ago     4w   count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a  
iscsi.iscsi                    0/2  -          -    magna092;magna093;count:2           <unknown>                                                                                                       <unknown>     
mds.test                       3/3  5m ago     11d  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
mgr                            2/2  5m ago     4w   count:2                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
mon                            3/3  5m ago     4w   magna094;magna067;magna073;count:3  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
nfs.ganesha-testnfs            1/1  5m ago     11d  count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
node-exporter                  9/9  5m ago     4w   *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf  
osd.None                       9/0  5m ago     -    <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
osd.all-available-devices    17/17  5m ago     4s   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
prometheus                     1/1  5m ago     4w   count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225  
rgw.myorg.us-east-1            2/2  5m ago     12d  magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  




[root@magna094 ubuntu]# ceph mgr module disable cephadm
[root@magna094 ubuntu]# ceph mgr module enable cephadm
[root@magna094 ubuntu]# 
[root@magna094 ubuntu]# 
[root@magna094 ubuntu]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE   PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID      
alertmanager                   1/1  7m ago     4w    count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f  
crash                          9/9  7m ago     4w    *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
grafana                        1/1  7m ago     4w    count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a  
iscsi.iscsi                    0/2  -          -     magna092;magna093;count:2           <unknown>                                                                                                       <unknown>     
mds.test                       3/3  7m ago     11d   count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
mgr                            2/2  7m ago     4w    count:2                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
mon                            3/3  7m ago     4w    magna094;magna067;magna073;count:3  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
nfs.ganesha-testnfs            1/1  7m ago     11d   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
node-exporter                  9/9  7m ago     4w    *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf  
osd.None                       9/0  7m ago     -     <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
osd.all-available-devices    17/17  7m ago     118s  *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
prometheus                     1/1  7m ago     4w    count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225  
rgw.myorg.us-east-1            2/2  7m ago     12d   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861  
[root@magna094 ubuntu]#
After restarting cephadm, the unmanaged setting is gone: the placement for osd.all-available-devices has changed from <unmanaged> to *.
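A possible interim workaround (an assumption, not verified in this report): re-apply the spec with the flag to put the service back into unmanaged mode; the flag would be lost again on the next restart until the fix lands.

ceph orch apply osd --all-available-devices --unmanaged=true    # assumption: restores the unmanaged flag until the next cephadm restart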

Comment 1 Juan Miguel Olmo 2021-04-23 07:28:50 UTC
Backport to pacific on-going: https://github.com/ceph/ceph/pull/40922

Comment 2 Sebastian Wagner 2021-06-18 11:51:58 UTC
would be great to have this in z1

Comment 3 Sebastian Wagner 2021-07-06 10:28:18 UTC
pushed to downstream

Comment 8 Preethi 2021-07-09 08:49:52 UTC
@Juan, verified the BZ with ceph version 16.2.0-98.el8cp and the issue is not seen. Hence, moving the BZ to VERIFIED.

[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph version
ceph version 16.2.0-98.el8cp (9c6352ff5276f8fb2029981206f3516707220054) pacific (stable)
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# 



[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch apply osd --all-available-devices --unmanaged=true
Scheduled osd.all-available-devices update...
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT                                                                                                                                                                                       
alertmanager                   2/2  3m ago     10d  count:2;label:alertmanager                                                                                                                                                                      
crash                          3/3  3m ago     10d  *                                                                                                                                                                                               
grafana                        0/1  -          10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e                                                                                                                                 
mds.cephfs                     2/2  3m ago     10d  label:mds                                                                                                                                                                                       
mgr                            2/2  3m ago     10d  label:mgr                                                                                                                                                                                       
mon                            3/3  3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash  
node-exporter                  3/3  3m ago     10d  *                                                                                                                                                                                               
osd.all-available-devices    12/15  3m ago     7s   <unmanaged>                                                                                                                                                                                     
prometheus                     1/1  3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;count:1                                                                                                                         
rgw.myrgw                      2/2  3m ago     10d  ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash                                                                  
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph mgr module disable cephadm
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph mgr module enable cephadm
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT                                                                                                                                                                                       
alertmanager                   2/2  3m ago     10d  count:2;label:alertmanager                                                                                                                                                                      
crash                          3/3  4m ago     10d  *                                                                                                                                                                                               
grafana                        0/1  -          10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e                                                                                                                                 
mds.cephfs                     2/2  4m ago     10d  label:mds                                                                                                                                                                                       
mgr                            2/2  3m ago     10d  label:mgr                                                                                                                                                                                       
mon                            3/3  4m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash  
node-exporter                  3/3  4m ago     10d  *                                                                                                                                                                                               
osd.all-available-devices    12/15  4m ago     45s  <unmanaged>                                                                                                                                                                                     
prometheus                     1/1  3m ago     10d  ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e;count:1                                                                                                                         
rgw.myrgw                      2/2  4m ago     10d  ceph-threetest-1624873245298-node2-osd-mon-mgr-mds-node-exporte;ceph-threetest-1624873245298-node3-mon-osd-node-exporter-crash                                                                  
[ceph: root@ceph-threetest-1624873245298-node1-installer-mon-mgr-osd-node-e /]#

Comment 10 errata-xmlrpc 2021-08-30 08:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

