Bug 2317213 - [CephFS][MDS] MDS rank info mismatch between cli and json output for standby-replay MDS [NEEDINFO]
Summary: [CephFS][MDS] MDS rank info mismatch between cli and json output for standby-...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1z3
Assignee: Kotresh HR
QA Contact: sumr
Docs Contact: Disha Walvekar
URL:
Whiteboard:
Depends On:
Blocks: 2307231 2317562
 
Reported: 2024-10-08 10:14 UTC by Kotresh HR
Modified: 2025-02-24 15:42 UTC
CC List: 7 users

Fixed In Version: ceph-18.2.1-265.el9cp
Doc Type: Bug Fix
Doc Text:
.JSON output of the `ceph fs status` command now correctly prints the rank field
Previously, due to a bug in the JSON output of the `ceph fs status` command, the rank field for standby-replay MDS daemons was incorrect. Instead of the format `{rank}-s`, where `{rank}` is the rank of the active MDS that the standby-replay daemon is following, it displayed a random rank. With this fix, the JSON output of the `ceph fs status` command correctly prints the rank field for standby-replay MDS daemons in the format `{rank}-s`.
Clone Of:
Environment:
Last Closed: 2025-02-24 15:41:50 UTC
Embargoed:
khiremat: needinfo? (vshankar)
hyelloji: needinfo+
rpollack: needinfo? (khiremat)




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-9944 0 None None None 2024-10-08 10:14:50 UTC
Red Hat Product Errata RHBA-2025:1770 0 None None None 2025-02-24 15:42:03 UTC

Description Kotresh HR 2024-10-08 10:14:12 UTC
This bug was initially created as a copy of Bug #2307231

Description of problem:
The MDS rank info for a standby-replay MDS daemon does not match between the CLI and JSON output of ceph fs status.

For a standby-replay MDS daemon attached to the active MDS with rank 0, the rank is shown as '0-s' in the CLI output but as '1' in the JSON output; please refer to the output below for details.

CLI output:
-----------

[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status
cephfs - 0 clients
======
RANK      STATE                            MDS                          ACTIVITY     DNS    INOS   DIRS   CAPS  
 0        active      cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm  Reqs:    0 /s    10     13     12      0   
 1        active      cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz  Reqs:    0 /s    10     13     11      0   
0-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz  Evts:    0 /s     0      3      2      0   
1-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda  Evts:    0 /s     0      0      0      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata   168k  54.9G  
cephfs.cephfs.data    data       0   54.9G  
                 STANDBY MDS                   
cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk  
MDS version: ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)

JSON output:
------------
[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status --f json

{"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk"], "version": "ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "rank": 0, "rate": 0, "state": "active"}, {"caps": 0, "dirs": 11, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "rank": 1, "rate": 0, "state": "active"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": 1, "state": "standby-replay"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": 1, "state": "standby-replay"}, {"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk", "state": "standby"}], "pools": [{"avail": 58956873728, "id": 4, "name": "cephfs.cephfs.meta", "type": "metadata", "used": 172032}, {"avail": 58956873728, "id": 5, "name": "cephfs.cephfs.data", "type": "data", "used": 0}]}



Version-Release number of selected component (if applicable): 19.1.0-42.el9cp


How reproducible: 5/5


Steps to Reproduce:
1. Configure a standby-replay MDS daemon
2. Run ceph fs status and ceph fs status --f json
3. Compare the rank fields for the standby-replay MDS daemons between the CLI and JSON output

Actual results: The JSON output shows rank 1 for the standby-replay MDS attached to the active MDS with rank 0.


Expected results: The JSON output should match the CLI output. In the CLI, the rank for a standby-replay MDS is shown as '0-s', which is helpful because it tells us the daemon is attached to the active MDS with rank 0; likewise, '1-s' indicates the active MDS with rank 1.

The same information should be available in the JSON output for standby-replay MDS daemons.
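
For example, with such a change the standby-replay entries in the JSON mdsmap could look along these lines (illustrative only, other fields omitted with ...):

{"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": "0-s", "state": "standby-replay", ...}
{"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": "1-s", "state": "standby-replay", ...}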


Additional info:
Please let me know if additional information is required.

Comment 1 Storage PM bot 2024-10-08 10:14:23 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 11 errata-xmlrpc 2025-02-24 15:41:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.1 security, bug fix, enhancement, and known issue updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:1770

