Bug 2144472

Summary: [RHCS 7.0] The command `ceph mds metadata` doesn't list information for the active MDS server
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: nravinas
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: Amarnath <amk>
Severity: low
Docs Contact: Akash Raj <akraj>
Priority: medium
Version: 5.2
CC: akraj, ceph-eng-bugs, cephqe-warriors, gfarnum, hyelloji, lithomas, mchangir, pdonnell, tserlin, vereddy, vshankar
Target Milestone: ---
Keywords: Rebase
Target Release: 7.1
Hardware: All
OS: Linux
Fixed In Version: ceph-18.2.1-2.el9cp
Doc Type: Bug Fix
Doc Text:
.MDS metadata is now added in batches with FSMap changes to ensure consistency
Previously, monitors would sometimes lose track of MDS metadata during upgrades and cancelled PAXOS transactions, resulting in MDS metadata no longer being available. With this fix, MDS metadata is added in batches with FSMap changes to ensure consistency, and the `ceph mds metadata` command functions as intended across upgrades.
Story Points: ---
Cloned As: 2236385 (view as bug list)
Last Closed: 2024-06-13 14:19:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
Bug Blocks: 2267614, 2272098, 2298578, 2298579
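
As a quick check of the behavior described in the Doc Text, the daemons listed by `ceph mds metadata` can be compared against the MDS daemons in the FSMap after an upgrade or monitor restart. A minimal sketch; <daemon-name> is a placeholder, and the exact output shape depends on the release:

# ceph fs status                      # lists every MDS daemon in the FSMap (active and standby)
# ceph mds metadata | grep '"name"'   # each daemon above should appear here with full metadata
# ceph mds metadata <daemon-name>     # per-daemon query for a single MDS

With the fix, the per-daemon query returns version and host details for the active MDS as well, rather than an empty or missing entry.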

Description nravinas 2022-11-21 12:00:39 UTC
Description of problem:

The command `ceph mds metadata` doesn't list information for the active MDS server
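
For context, the failure mode looks roughly like the following; daemon names are placeholders, and whether the query returns an empty map or an error varies:

# ceph fs status                       # identify the active MDS daemon
# ceph mds metadata <active-daemon>    # empty or missing, instead of the daemon's metadata
# ceph mds metadata <standby-daemon>   # standby daemons still report full metadata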

Comment 12 Venky Shankar 2022-11-30 05:33:05 UTC
Thanks, Natalia. I'll have a look today.

Comment 16 Venky Shankar 2022-11-30 09:07:50 UTC
NEEDINFO on me.

Comment 17 Venky Shankar 2022-11-30 12:53:01 UTC
https://tracker.ceph.com/issues/24403

Comment 18 Venky Shankar 2022-11-30 12:53:50 UTC
https://tracker.ceph.com/issues/24403#note-10

Comment 67 Amarnath 2024-04-01 12:55:28 UTC
Verified with the following steps (see the command sketch below):

A cluster with 3 MONs, 3 MDS daemons (one active, the other two standby), and 6 OSDs.
Step 1. Stop the two standby MDS daemons.
Step 2. Restart all MONs (to make pending_metadata consistent with the DB).
Step 3. Start the other two MDS daemons.
Step 4. Stop the leader MON.
Step 5. Run the `ceph mds metadata` command to check the MDS metadata.
Step 6. Stop the active MDS.
Step 7. Run the `ceph mds metadata` command to check the MDS metadata again.
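
A hedged sketch of how these steps map to CLI commands; daemon names are placeholders, and the `ceph orch` calls assume a cephadm-managed cluster:

# ceph orch daemon stop mds.<standby-1>    # step 1: stop both standby MDS daemons
# ceph orch daemon stop mds.<standby-2>
# ceph orch restart mon                    # step 2: restart all MONs
# ceph orch daemon start mds.<standby-1>   # step 3: start the standbys again
# ceph orch daemon start mds.<standby-2>
# ceph mon stat                            # identify the leader MON
# ceph orch daemon stop mon.<leader>       # step 4: stop the leader MON
# ceph mds metadata                        # step 5: every MDS daemon should be listed with metadata
# ceph orch daemon stop mds.<active>       # step 6: stop the active MDS
# ceph mds metadata                        # step 7: metadata should remain complete and consistent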


[root@ceph-amk-bz-pbput7-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 12
    },
    "mds": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 2
    },
    "overall": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 19
    }
}
[root@ceph-amk-bz-pbput7-node7 ~]# 

Detailed steps and CLI command outputs are in the document below:
https://docs.google.com/document/d/1_UsEeN3Enjjku_walO9fulPnr_EFnNWaES5_QkjIpF0/edit

Comment 68 errata-xmlrpc 2024-06-13 14:19:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925