
Bug 2376021

Summary: ceph orch apply command for MON is not functioning as expected
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 8.1
Target Release: 8.1z1
Fixed In Version: ceph-19.2.1-232.el9cp
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Keywords: Regression
Hardware: Unspecified
OS: Unspecified
Reporter: Amarnath <amk>
Assignee: Adam King <adking>
QA Contact: Amarnath <amk>
Docs Contact: Rivka Pollack <rpollack>
CC: adking, akane, cephqe-warriors, rpollack, tserlin, vdas
Target Milestone: ---
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2025-08-18 14:01:28 UTC

Description Amarnath 2025-07-03 05:46:18 UTC
Description of problem:
A mon goes into the stopped state and never comes back up.

Steps followed:
1. Created a cluster with 3 mons; the cluster is in a healthy state:
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph -s
  cluster:
    id:     b2112e14-57c9-11f0-96dc-fa163e69bc74
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-amk-recovery-zrsgjn-node1-installer,ceph-amk-recovery-zrsgjn-node2,ceph-amk-recovery-zrsgjn-node3 (age 11m)
    mgr: ceph-amk-recovery-zrsgjn-node1-installer.mlpkgi(active, since 15m), standbys: ceph-amk-recovery-zrsgjn-node2.surulk
    mds: 3/3 daemons up, 2 standby
    osd: 16 osds: 16 up (since 6m), 16 in (since 7m); 1 remapped pgs
 
  data:
    volumes: 2/2 healthy
    pools:   5 pools, 613 pgs
    objects: 59 objects, 453 KiB
    usage:   1.1 GiB used, 239 GiB / 240 GiB avail
    pgs:     0.163% pgs not active
             612 active+clean
             1   peering
 
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch host ls
HOST                                      ADDR         LABELS                    STATUS  
ceph-amk-recovery-zrsgjn-node1-installer  10.0.64.37   _admin,installer,mon,mgr          
ceph-amk-recovery-zrsgjn-node2            10.0.67.12   mon,mgr                           
ceph-amk-recovery-zrsgjn-node3            10.0.67.141  mon,mds                           
ceph-amk-recovery-zrsgjn-node4            10.0.65.132  mds,smb,osd                       
ceph-amk-recovery-zrsgjn-node5            10.0.64.222  mds,smb,osd                       
ceph-amk-recovery-zrsgjn-node6            10.0.64.74   nfs,mds,osd                       
ceph-amk-recovery-zrsgjn-node7            10.0.65.50   osd,mds,nfs                       
7 hosts in cluster
[root@ceph-amk-recovery-zrsgjn-node8 ~]# 
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch ps
NAME                                                 HOST                                      PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION           IMAGE ID      CONTAINER ID  
mds.cephfs.ceph-amk-recovery-zrsgjn-node3.ketpnj     ceph-amk-recovery-zrsgjn-node3                         running (6m)      4m ago   6m    15.1M        -  19.2.1-228.el9cp  166cb4f02f6d  8e8d6df59e47  
mds.cephfs.ceph-amk-recovery-zrsgjn-node4.xxjoxd     ceph-amk-recovery-zrsgjn-node4                         running (6m)      4m ago   6m    17.0M        -  19.2.1-228.el9cp  166cb4f02f6d  327a47180817  
mds.cephfs.ceph-amk-recovery-zrsgjn-node5.pkvrmg     ceph-amk-recovery-zrsgjn-node5                         running (6m)      4m ago   6m    13.1M        -  19.2.1-228.el9cp  166cb4f02f6d  6f6693a57743  
mds.cephfs.ceph-amk-recovery-zrsgjn-node6.gywfgd     ceph-amk-recovery-zrsgjn-node6                         running (6m)      4m ago   6m    16.0M        -  19.2.1-228.el9cp  166cb4f02f6d  e6912bac0c02  
mds.cephfs.ceph-amk-recovery-zrsgjn-node7.bkujnx     ceph-amk-recovery-zrsgjn-node7                         running (6m)      4m ago   6m    14.6M        -  19.2.1-228.el9cp  166cb4f02f6d  614590bf79f7  
mgr.ceph-amk-recovery-zrsgjn-node1-installer.mlpkgi  ceph-amk-recovery-zrsgjn-node1-installer  *:9283,8765  running (16m)     4m ago  16m     520M        -  19.2.1-228.el9cp  166cb4f02f6d  12f1f85ddaf8  
mgr.ceph-amk-recovery-zrsgjn-node2.surulk            ceph-amk-recovery-zrsgjn-node2            *:8443,8765  running (13m)     4m ago  13m     444M        -  19.2.1-228.el9cp  166cb4f02f6d  444597cfb2db  
mon.ceph-amk-recovery-zrsgjn-node1-installer         ceph-amk-recovery-zrsgjn-node1-installer               running (16m)     4m ago  16m    52.3M    2048M  19.2.1-228.el9cp  166cb4f02f6d  4dad54d3104d  
mon.ceph-amk-recovery-zrsgjn-node2                   ceph-amk-recovery-zrsgjn-node2                         running (11m)     4m ago  11m    40.6M    2048M  19.2.1-228.el9cp  166cb4f02f6d  12bd135a5080  
mon.ceph-amk-recovery-zrsgjn-node3                   ceph-amk-recovery-zrsgjn-node3                         running (11m)     4m ago  11m    40.4M    2048M  19.2.1-228.el9cp  166cb4f02f6d  3fdd4fbbf6de  
osd.0                                                ceph-amk-recovery-zrsgjn-node6                         running (7m)      4m ago   7m    73.3M    4096M  19.2.1-228.el9cp  166cb4f02f6d  ea7fa1e193a2  
osd.1                                                ceph-amk-recovery-zrsgjn-node4                         running (7m)      4m ago   7m    62.9M    4096M  19.2.1-228.el9cp  166cb4f02f6d  4ad25469d185  
osd.2                                                ceph-amk-recovery-zrsgjn-node7                         running (6m)      4m ago   6m    67.5M    4096M  19.2.1-228.el9cp  166cb4f02f6d  6ac8266c94a7  
osd.3                                                ceph-amk-recovery-zrsgjn-node5                         running (6m)      4m ago   6m    71.8M    4096M  19.2.1-228.el9cp  166cb4f02f6d  2bea602e282d  
osd.4                                                ceph-amk-recovery-zrsgjn-node4                         running (6m)      4m ago   6m    70.5M    4096M  19.2.1-228.el9cp  166cb4f02f6d  372675066e5f  
osd.5                                                ceph-amk-recovery-zrsgjn-node7                         running (6m)      4m ago   6m    74.3M    4096M  19.2.1-228.el9cp  166cb4f02f6d  248570b303a1  
osd.6                                                ceph-amk-recovery-zrsgjn-node6                         running (6m)      4m ago   6m    71.5M    4096M  19.2.1-228.el9cp  166cb4f02f6d  b18ef95602a1  
osd.7                                                ceph-amk-recovery-zrsgjn-node5                         running (6m)      4m ago   6m    65.9M    4096M  19.2.1-228.el9cp  166cb4f02f6d  b309e987332e  
osd.8                                                ceph-amk-recovery-zrsgjn-node4                         running (6m)      4m ago   6m    67.6M    4096M  19.2.1-228.el9cp  166cb4f02f6d  ea39c9bbe9ea  
osd.9                                                ceph-amk-recovery-zrsgjn-node7                         running (6m)      4m ago   6m    68.4M    4096M  19.2.1-228.el9cp  166cb4f02f6d  7ac69a34d9c0  
osd.10                                               ceph-amk-recovery-zrsgjn-node6                         running (6m)      4m ago   6m    69.2M    4096M  19.2.1-228.el9cp  166cb4f02f6d  98832f163ff0  
osd.11                                               ceph-amk-recovery-zrsgjn-node5                         running (7m)      4m ago   7m    70.3M    4096M  19.2.1-228.el9cp  166cb4f02f6d  4e7b626e660b  
osd.12                                               ceph-amk-recovery-zrsgjn-node4                         running (6m)      4m ago   6m    66.3M    4096M  19.2.1-228.el9cp  166cb4f02f6d  6a3c79feebba  
osd.13                                               ceph-amk-recovery-zrsgjn-node7                         running (7m)      4m ago   7m    68.7M    4096M  19.2.1-228.el9cp  166cb4f02f6d  fc24a9689c62  
osd.14                                               ceph-amk-recovery-zrsgjn-node6                         running (6m)      4m ago   6m    72.7M    4096M  19.2.1-228.el9cp  166cb4f02f6d  daef79bd2730  
osd.15                                               ceph-amk-recovery-zrsgjn-node5                         running (6m)      4m ago   6m    70.9M    4096M  19.2.1-228.el9cp  166cb4f02f6d  c921213ba46e  
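
For context, a minimal sketch of how a layout like this is typically built with cephadm. The commands below are illustrative assumptions reconstructed from the host list above, not commands recorded in this run:

cephadm bootstrap --mon-ip 10.0.64.37
ceph orch host add ceph-amk-recovery-zrsgjn-node2 10.0.67.12
ceph orch host label add ceph-amk-recovery-zrsgjn-node2 mon
ceph orch host label add ceph-amk-recovery-zrsgjn-node3 mon

With the hosts labelled this way, mon placement can later be managed either by label or by an explicit host list, which is what the steps below exercise.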

2. Applied mon on 2 of the 3 nodes with --placement:
ceph orch --verbose apply mon cephfs --placement='ceph-amk-recovery-zrsgjn-node1-installer ceph-amk-recovery-zrsgjn-node2'
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch ps | grep mon
mon.ceph-amk-recovery-zrsgjn-node1-installer         ceph-amk-recovery-zrsgjn-node1-installer               running (18m)     5m ago  18m    52.3M    2048M  19.2.1-228.el9cp  166cb4f02f6d  4dad54d3104d  
mon.ceph-amk-recovery-zrsgjn-node2                   ceph-amk-recovery-zrsgjn-node2                         running (13m)     5m ago  13m    40.6M    2048M  19.2.1-228.el9cp  166cb4f02f6d  12bd135a5080  
mon.ceph-amk-recovery-zrsgjn-node3                   ceph-amk-recovery-zrsgjn-node3                         stopped           1s ago  13m        -    2048M  <unknown>         <unknown>     <unknown>    
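
For reference, the same two-host placement can also be expressed as a service spec and applied with "ceph orch apply -i" (a sketch in the standard cephadm spec format; mon-spec.yaml is a hypothetical file name, not one used in this run):

# mon-spec.yaml
service_type: mon
placement:
  hosts:
    - ceph-amk-recovery-zrsgjn-node1-installer
    - ceph-amk-recovery-zrsgjn-node2

ceph orch apply -i mon-spec.yaml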


3. Tried applying again with the mon label:
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch apply mon label:mon
Scheduled mon update...

[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch host ls | grep mon
ceph-amk-recovery-zrsgjn-node1-installer  10.0.64.37   _admin,installer,mon,mgr          
ceph-amk-recovery-zrsgjn-node2            10.0.67.12   mon,mgr                           
ceph-amk-recovery-zrsgjn-node3            10.0.67.141  mon,mds                           
[root@ceph-amk-recovery-zrsgjn-node8 ~]# 

The mon on node3 is still in the stopped state.
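
To compare what the orchestrator scheduled against what is actually running, the active spec and the daemon's systemd state can be inspected (illustrative diagnostics using standard cephadm conventions; the unit name below is constructed from the cluster fsid in the ceph -s output and was not captured in this report):

ceph orch ls mon --export
# on node3:
systemctl status ceph-b2112e14-57c9-11f0-96dc-fa163e69bc74@mon.ceph-amk-recovery-zrsgjn-node3.service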

4. Tried with an explicit placement of all three mon hosts as well:
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch apply mon cephfs --placement='ceph-amk-recovery-zrsgjn-node1-installer ceph-amk-recovery-zrsgjn-node2 ceph-amk-recovery-zrsgjn-node3'
Scheduled mon update...
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch ps | grep mon
mon.ceph-amk-recovery-zrsgjn-node1-installer         ceph-amk-recovery-zrsgjn-node1-installer               running (44m)   107s ago  44m    78.9M    2048M  19.2.1-228.el9cp  166cb4f02f6d  4dad54d3104d  
mon.ceph-amk-recovery-zrsgjn-node2                   ceph-amk-recovery-zrsgjn-node2                         running (39m)   107s ago  39m    70.2M    2048M  19.2.1-228.el9cp  166cb4f02f6d  12bd135a5080  
mon.ceph-amk-recovery-zrsgjn-node3                   ceph-amk-recovery-zrsgjn-node3                         stopped          44s ago  39m        -    2048M  <unknown>         <unknown>     <unknown>     
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph orch ps | grep mon
mon.ceph-amk-recovery-zrsgjn-node1-installer         ceph-amk-recovery-zrsgjn-node1-installer               running (44m)   110s ago  44m    78.9M    2048M  19.2.1-228.el9cp  166cb4f02f6d  4dad54d3104d  
mon.ceph-amk-recovery-zrsgjn-node2                   ceph-amk-recovery-zrsgjn-node2                         running (39m)   110s ago  39m    70.2M    2048M  19.2.1-228.el9cp  166cb4f02f6d  12bd135a5080  
mon.ceph-amk-recovery-zrsgjn-node3                   ceph-amk-recovery-zrsgjn-node3                         stopped          47s ago  39m        -    2048M  <unknown>         <unknown>     <unknown>     
[root@ceph-amk-recovery-zrsgjn-node8 ~]# 
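
As a manual recovery attempt, the stopped daemon could be started or redeployed directly (standard orchestrator daemon commands, offered here as a hedged suggestion; they were not recorded in this report):

ceph orch daemon start mon.ceph-amk-recovery-zrsgjn-node3
ceph orch daemon redeploy mon.ceph-amk-recovery-zrsgjn-node3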


Cephadm_mon logs: http://magna002.ceph.redhat.com/ceph-qe-logs/amk/cephadm_mon.log


Version-Release number of selected component (if applicable):
[root@ceph-amk-recovery-zrsgjn-node8 ~]# ceph versions
{
    "mon": {
        "ceph version 19.2.1-228.el9cp (bff49c0b94a24828c46a7f277764d34df9d7d0ab) squid (stable)": 2
    },
    "mgr": {
        "ceph version 19.2.1-228.el9cp (bff49c0b94a24828c46a7f277764d34df9d7d0ab) squid (stable)": 2
    },
    "osd": {
        "ceph version 19.2.1-228.el9cp (bff49c0b94a24828c46a7f277764d34df9d7d0ab) squid (stable)": 16
    },
    "mds": {
        "ceph version 19.2.1-228.el9cp (bff49c0b94a24828c46a7f277764d34df9d7d0ab) squid (stable)": 5
    },
    "overall": {
        "ceph version 19.2.1-228.el9cp (bff49c0b94a24828c46a7f277764d34df9d7d0ab) squid (stable)": 25
    }
}


How reproducible:
Consistently in the reported environment; the mon on node3 remained stopped across repeated apply attempts.

Steps to Reproduce:
1. Deploy a cluster with 3 mons and confirm it is in HEALTH_OK.
2. Run ceph orch apply mon with a --placement list that contains only 2 of the 3 mon hosts.
3. Re-apply the mon service with label:mon, or with a placement list that again includes all 3 hosts.

Actual results:
The mon on the host dropped from the placement goes to the stopped state and is never redeployed, even after re-applying with label:mon or with the full 3-host placement.

Expected results:
Re-applying the mon service with label:mon, or with a placement that includes the host, should redeploy and start the mon on that host.

Additional info:

Comment 1 Storage PM bot 2025-07-03 05:46:31 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 12 errata-xmlrpc 2025-08-18 14:01:28 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.1 security and bug fix updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:14015