Bug 2307231

Summary: [CephFS][MDS] MDS rank info mismatch between cli and json output for standby-replay MDS
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: sumr
Component: CephFS    Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA QA Contact: sumr
Severity: medium Docs Contact:
Priority: unspecified    
Version: 8.0    CC: akraj, amk, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, khiremat, ngangadh, rpollack, tserlin, vshankar
Target Milestone: ---    Flags: khiremat: needinfo-
Target Release: 8.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-19.2.0-30.el9cp Doc Type: Bug Fix
Doc Text:
.JSON output of the `ceph fs status` command now correctly prints the rank field
Previously, due to a bug in the JSON output of the `ceph fs status` command, the rank field for standby-replay MDS daemons was incorrect. Instead of the format `{rank}-s`, where {rank} is the rank of the active MDS that the standby-replay daemon is following, it displayed a random {rank}. With this fix, the JSON output of the `ceph fs status` command correctly prints the rank field for standby-replay MDS daemons in the format `{rank}-s`.
Story Points: ---
Clone Of:
: 2317562    Environment:
Last Closed: 2024-11-25 09:06:47 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 2317213    
Bug Blocks: 2317218, 2317562    

Description sumr 2024-08-22 08:22:06 UTC
Description of problem:
The MDS rank info for a standby-replay MDS daemon does not match between the CLI and JSON output.

For a standby-replay MDS daemon attached to the active MDS with rank 0, the rank shown for the standby-replay MDS is '0-s' in the CLI output but '1' in the JSON output; please refer to the outputs below for details.

CLI output:
-----------

[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status
cephfs - 0 clients
======
RANK      STATE                            MDS                          ACTIVITY     DNS    INOS   DIRS   CAPS  
 0        active      cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm  Reqs:    0 /s    10     13     12      0   
 1        active      cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz  Reqs:    0 /s    10     13     11      0   
0-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz  Evts:    0 /s     0      3      2      0   
1-s   standby-replay  cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda  Evts:    0 /s     0      0      0      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata   168k  54.9G  
cephfs.cephfs.data    data       0   54.9G  
                 STANDBY MDS                   
cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk  
MDS version: ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)

JSON output:
------------
[root@ceph-sumar-tfa-fix-h3305o-node7 ~]# ceph fs status --f json

{"clients": [{"clients": 0, "fs": "cephfs"}], "mds_version": [{"daemon": ["cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk"], "version": "ceph version 19.1.0-42.el9cp (03ae7f7ffec5e7796d2808064c4766b35c4b5ffb) squid (rc)"}], "mdsmap": [{"caps": 0, "dirs": 12, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node4.mpgdkm", "rank": 0, "rate": 0, "state": "active"}, {"caps": 0, "dirs": 11, "dns": 10, "inos": 13, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node5.mjoicz", "rank": 1, "rate": 0, "state": "active"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node2.hhizjz", "rank": 1, "state": "standby-replay"}, {"caps": 5, "dirs": 5, "dns": 5, "events": 0, "inos": 5, "name": "cephfs.ceph-sumar-tfa-fix-h3305o-node6.oqyvda", "rank": 1, "state": "standby-replay"}, {"name": "cephfs.ceph-sumar-tfa-fix-h3305o-node3.cllcsk", "state": "standby"}], "pools": [{"avail": 58956873728, "id": 4, "name": "cephfs.cephfs.meta", "type": "metadata", "used": 172032}, {"avail": 58956873728, "id": 5, "name": "cephfs.cephfs.data", "type": "data", "used": 0}]}



Version-Release number of selected component (if applicable): 19.1.0-42.el9cp


How reproducible: 5/5


Steps to Reproduce:
1. Configure a standby-replay MDS daemon.
2. Run ceph fs status and ceph fs status --f json.
3. Compare the rank fields reported for the standby-replay MDS daemons in the CLI and JSON output.

Actual results: The JSON response shows rank 1 for the standby-replay MDS attached to the active MDS with rank 0.


Expected results: The JSON response should match the CLI output. In the CLI, the rank for a standby-replay MDS is shown as '0-s', which is helpful because it tells us the daemon is following the active MDS with rank 0; likewise, '1-s' indicates it is following the active MDS with rank 1.

The same information should be available in the JSON output for standby-replay MDS daemons.
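
A hedged verification sketch along the same lines; the expected post-fix representation of the rank field (a string of the form '{rank}-s') is taken from the Doc Text above:

import json
import re
import subprocess

out = subprocess.check_output(["ceph", "fs", "status", "--format", "json"])
status = json.loads(out)

# After the fix, each standby-replay entry is expected to report its rank
# in the same "<rank>-s" form that the CLI table prints.
for entry in status["mdsmap"]:
    if entry.get("state") == "standby-replay":
        rank = str(entry["rank"])
        assert re.fullmatch(r"\d+-s", rank), "unexpected rank %r for %s" % (rank, entry["name"])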


Additional info:
Please let me know if additional info is required.

Comment 15 errata-xmlrpc 2024-11-25 09:06:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

Comment 16 Red Hat Bugzilla 2025-03-26 04:25:58 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days