Description of problem:
The MDS rank info for a standby-replay MDS daemon does not match between the CLI and JSON output. For a standby-replay MDS daemon attached to the active MDS with rank 0, the rank is shown as '0-s' in the CLI but as '1' in the JSON output. See the output below for details.

CLI output:
-----------
[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status
cephfs - 0 clients
======
RANK      STATE                         MDS                              ACTIVITY     DNS    INOS   DIRS   CAPS
 0        active          cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm  Reqs:    0 /s    10     13     12      0
 1        active          cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz  Reqs:    0 /s    10     13     11      0
0-s   standby-replay      cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz  Evts:    0 /s     0      3      2      0
1-s   standby-replay      cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda  Evts:    0 /s     0      0      0      0
       POOL           TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata   168k  54.9G
cephfs.cephfs.data    data       0   54.9G
                 STANDBY MDS
cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk
MDS version: ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)

JSON output:
------------
[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status --f json
{
  "clients": [{"clients": 0, "fs": "cephfs"}],
  "mds_version": [
    {
      "daemon": [
        "cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm",
        "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz",
        "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz",
        "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda",
        "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk"
      ],
      "version": "ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)"
    }
  ],
  "mdsmap": [
    {"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "rank": 0, "rate": 0, "state": "active"},
    {"caps": 0, "dirs": 11, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "rank": 1, "rate": 0, "state": "active"},
    {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": 1, "state": "standby-replay"},
    {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": 1, "state": "standby-replay"},
    {"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk", "state": "standby"}
  ],
  "pools": [
    {"avail": 58956873728, "id": 4, "name": "cephfs.cephfs.meta", "type": "metadata", "used": 172032},
    {"avail": 58956873728, "id": 5, "name": "cephfs.cephfs.data", "type": "data", "used": 0}
  ]
}

Version-Release number of selected component (if applicable): 19.1.0-42.el9cp

How reproducible: 5/5

Steps to Reproduce:
1. Configure a standby-replay MDS daemon.
2. Run ceph fs status and ceph fs status --f json.
3. Compare the CLI and JSON output for the rank fields of the standby-replay MDS daemon.

Actual results:
The JSON response shows rank 1 for the standby-replay MDS attached to the active MDS with rank 0.

Expected results:
The JSON response should match the CLI output. In the CLI, the rank for a standby-replay MDS is shown as '0-s', which is useful information: it tells us the daemon is attached to the active MDS with rank 0 (and likewise '1-s' for rank 1). The same information should be available in the JSON output for standby-replay MDS entries.

Additional info:
Please let me know if additional info is required.
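As a sketch of how a consumer might derive the CLI-style rank string from the JSON mdsmap entries, the small helper below (hypothetical code, with daemon names trimmed for brevity) applies the same '<rank>-s' convention the CLI table uses. Note that because of this bug, the JSON rank field itself is wrong for node2 (1 instead of 0), so the helper would still render '1-s' where the CLI shows '0-s' until the JSON output is fixed.

```python
import json

# Trimmed sample based on the mdsmap portion of the report's JSON output.
fs_status_json = """
{"mdsmap": [
  {"name": "node4.mpgdkm", "rank": 0, "state": "active"},
  {"name": "node5.mjoicz", "rank": 1, "state": "active"},
  {"name": "node2.hhizjz", "rank": 1, "state": "standby-replay"},
  {"name": "node6.oqyvda", "rank": 1, "state": "standby-replay"},
  {"name": "node3.cllcsk", "state": "standby"}
]}
"""

def cli_rank(entry):
    """Render the rank the way the `ceph fs status` table does:
    a plain number for active MDS daemons, '<rank>-s' for
    standby-replay, and '-' for entries with no rank (standby)."""
    if "rank" not in entry:
        return "-"
    if entry["state"] == "standby-replay":
        return f"{entry['rank']}-s"
    return str(entry["rank"])

for mds in json.loads(fs_status_json)["mdsmap"]:
    print(mds["name"], cli_rank(mds))
```

Running this against the report's data prints '1-s' for both standby-replay daemons, illustrating that the JSON alone cannot reconstruct the CLI's '0-s' vs '1-s' distinction.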
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2024:10216
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.