Bug 2307231 - [CephFS][MDS] MDS rank info mismatch between cli and json output for standby-replay MDS
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 8.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 8.0
Assignee: Kotresh HR
QA Contact: sumr
URL:
Whiteboard:
Depends On: 2317213
Blocks: 2317218 2317562
 
Reported: 2024-08-22 08:22 UTC by sumr
Modified: 2025-03-26 04:25 UTC
CC List: 11 users

Fixed In Version: ceph-19.2.0-30.el9cp
Doc Type: Bug Fix
Doc Text:
.JSON output of the `ceph fs status` command now correctly prints the rank field
Previously, due to a bug in the JSON output of the `ceph fs status` command, the rank field for standby-replay MDS daemons was incorrect. Instead of the format `{rank}-s`, where `{rank}` is the rank of the active MDS that the standby-replay daemon is following, it displayed a random rank. With this fix, the JSON output of the `ceph fs status` command correctly prints the rank field for standby-replay MDS daemons in the format `{rank}-s`.
Clone Of:
Clones: 2317562
Environment:
Last Closed: 2024-11-25 09:06:47 UTC
Embargoed:
khiremat: needinfo-




Links
System                      ID                Last Updated
Ceph Project Bug Tracker    67978             2024-09-10 09:00:12 UTC
Red Hat Issue Tracker       RHCEPH-9503       2024-08-22 08:24:07 UTC
Red Hat Product Errata      RHBA-2024:10216   2024-11-25 09:06:56 UTC

Description sumr 2024-08-22 08:22:06 UTC
Description of problem:
MDS rank info for a standby-replay MDS daemon does not match between the CLI and JSON output.

For a standby-replay MDS daemon attached to the active MDS with rank 0, the rank shown for the standby-replay MDS is '0-s' in the CLI output but '1' in the JSON output; please refer to the output below for details.

CLI output:
-----------

[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status
cephfs - 0 clients
======
RANK      STATE                            MDS                          ACTIVITY     DNS    INOS   DIRS   CAPS  
 0        active      cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm  Reqs:    0 /s    10     13     12      0   
 1        active      cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz  Reqs:    0 /s    10     13     11      0   
0-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz  Evts:    0 /s     0      3      2      0   
1-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda  Evts:    0 /s     0      0      0      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata   168k  54.9G  
cephfs.cephfs.data    data       0   54.9G  
                 STANDBY MDS                   
cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk  
MDS version: ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)

JSON output:
------------
[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status --f json

{"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk"], "version": "ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "rank": 0, "rate": 0, "state": "active"}, {"caps": 0, "dirs": 11, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "rank": 1, "rate": 0, "state": "active"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": 1, "state": "standby-replay"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": 1, "state": "standby-replay"}, {"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk", "state": "standby"}], "pools": [{"avail": 58956873728, "id": 4, "name": "cephfs.cephfs.meta", "type": "metadata", "used": 172032}, {"avail": 58956873728, "id": 5, "name": "cephfs.cephfs.data", "type": "data", "used": 0}]}



Version-Release number of selected component (if applicable): 19.1.0-42.el9cp


How reproducible: 5/5


Steps to Reproduce:
1. Configure Standby Replay MDS daemon
2. Run ceph fs status and ceph fs status --f json
3. Compare the rank fields for the standby-replay MDS daemons in the CLI and JSON output (a scripted comparison sketch follows these steps)
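
The following is a minimal comparison sketch, not part of the original report: it assumes the `ceph` CLI is on PATH and the mdsmap JSON schema shown in the description, and flags standby-replay entries whose rank is not reported in the '{rank}-s' form that the CLI table uses.

Comparison sketch (Python):
---------------------------
#!/usr/bin/env python3
# Sketch only: verify that the JSON output of `ceph fs status` reports
# standby-replay MDS ranks as '{rank}-s', matching the CLI table.
# Assumes the `ceph` CLI is on PATH and the "mdsmap" schema shown above.
import json
import subprocess

out = subprocess.check_output(["ceph", "fs", "status", "--format", "json"])
status = json.loads(out)

for mds in status.get("mdsmap", []):
    if mds.get("state") != "standby-replay":
        continue
    rank = mds.get("rank")
    # Expected after the fix: a string such as "0-s".
    # Buggy behaviour: a plain integer that may not match the followed rank.
    ok = isinstance(rank, str) and rank.endswith("-s")
    print("{}: rank={!r} -> {}".format(mds.get("name"), rank, "OK" if ok else "MISMATCH"))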

Actual results: The JSON response shows rank 1 for the standby-replay MDS attached to the active MDS with rank 0 (in fact, both standby-replay daemons are reported with rank 1).


Expected results: The JSON response should match the CLI output. In the CLI, the rank for a standby-replay MDS is shown as '0-s', which is useful because it tells us the daemon is attached to the active MDS with rank 0; likewise, a standby-replay MDS following rank 1 is shown as '1-s'.

The same information should be available in the JSON output for standby-replay MDS daemons.
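
For illustration only (this is not actual output), with matching behaviour the standby-replay entries in the JSON would carry the same rank strings as the CLI table, for example:

{"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": "0-s", "state": "standby-replay", ...}
{"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": "1-s", "state": "standby-replay", ...}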


Additional info:
Please let me know if additional info is required.

Comment 15 errata-xmlrpc 2024-11-25 09:06:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

Comment 16 Red Hat Bugzilla 2025-03-26 04:25:58 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

