Bug 2144472 - [RHCS 7.0] The command `ceph mds metadata` doesn't list information for the active MDS server
Summary: [RHCS 7.0] The command `ceph mds metadata` doesn't list information for the active MDS server
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 7.1
Assignee: Patrick Donnelly
QA Contact: Amarnath
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2272098 2298578 2298579
 
Reported: 2022-11-21 12:00 UTC by nravinas
Modified: 2024-07-18 07:59 UTC
CC: 11 users

Fixed In Version: ceph-18.2.1-2.el9cp
Doc Type: Bug Fix
Doc Text:
.MDS metadata with FSMap changes is now added in batches to ensure consistency
Previously, monitors would sometimes lose track of MDS metadata during upgrades and cancelled PAXOS transactions, resulting in MDS metadata no longer being available. With this fix, MDS metadata with FSMap changes is added in batches to ensure consistency. The `ceph mds metadata` command now functions as intended across upgrades.
Clone Of:
Clones: 2236385
Environment:
Last Closed: 2024-06-13 14:19:47 UTC
Embargoed:




Links
System ID Last Updated
Ceph Project Bug Tracker 61693 2024-03-26 14:22:47 UTC
Ceph Project Bug Tracker 63413 2024-03-26 14:22:47 UTC
Red Hat Issue Tracker RHCEPH-5662 2022-11-21 12:06:52 UTC
Red Hat Product Errata RHSA-2024:3925 2024-06-13 14:20:01 UTC

Description nravinas 2022-11-21 12:00:39 UTC
Description of problem:

The command `ceph mds metadata` doesn't list information for the active MDS server
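
For illustration, the symptom on an affected cluster might look like the session below; the MDS names (cephfs.node1.aaaa, cephfs.node2.bbbb) are hypothetical placeholders and the exact error text can differ:

# The active MDS is missing from the metadata dump...
ceph mds metadata
[
    {
        "name": "cephfs.node2.bbbb",
        ...
    }
]

# ...and querying the active MDS by name fails
ceph mds metadata cephfs.node1.aaaa
Error ENOENT: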

Comment 12 Venky Shankar 2022-11-30 05:33:05 UTC
Thanks, Natalia. I'll have a look today.

Comment 16 Venky Shankar 2022-11-30 09:07:50 UTC
Needinfo on me.

Comment 17 Venky Shankar 2022-11-30 12:53:01 UTC
https://tracker.ceph.com/issues/24403

Comment 18 Venky Shankar 2022-11-30 12:53:50 UTC
https://tracker.ceph.com/issues/24403#note-10

Comment 67 Amarnath 2024-04-01 12:55:28 UTC
Verified with the steps below (a command sketch follows the list):

A cluster with 3 MONs and 3 MDS daemons (one active, two standby), and 6 OSDs.
Step 1. Stop the two standby MDS daemons.
Step 2. Restart all MONs (to make pending_metadata consistent with the mon db).
Step 3. Start the two standby MDS daemons again.
Step 4. Stop the leader MON.
Step 5. Run the `ceph mds metadata` command to check the MDS metadata.
Step 6. Stop the active MDS.
Step 7. Run the `ceph mds metadata` command to check the MDS metadata again.
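
A minimal command sketch of the flow above on a cephadm-managed cluster; the daemon names (mds.cephfs.node2.xxxx, mon.node1, and so on) are hypothetical placeholders for the ids that `ceph orch ps` reports on the test cluster:

# Step 1: stop the two standby MDS daemons
ceph orch daemon stop mds.cephfs.node2.xxxx
ceph orch daemon stop mds.cephfs.node3.yyyy

# Step 2: restart all monitors so pending_metadata is made consistent with the mon db
ceph orch restart mon

# Step 3: start the standby MDS daemons again
ceph orch daemon start mds.cephfs.node2.xxxx
ceph orch daemon start mds.cephfs.node3.yyyy

# Step 4: identify the leader monitor and stop it
ceph quorum_status -f json-pretty | grep quorum_leader_name
ceph orch daemon stop mon.node1

# Steps 5-7: the metadata dump should list every MDS, both before and
# after the active MDS is stopped and a standby takes over
ceph mds metadata
ceph orch daemon stop mds.cephfs.node1.zzzz
ceph mds metadata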


[root@ceph-amk-bz-pbput7-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 12
    },
    "mds": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 2
    },
    "overall": {
        "ceph version 18.2.1-105.el9cp (492eafbc9b91b19fa81322f2a8def7778d23d73c) reef (stable)": 19
    }
}
[root@ceph-amk-bz-pbput7-node7 ~]# 

Detailed steps and CLI command outputs are in the document below:
https://docs.google.com/document/d/1_UsEeN3Enjjku_walO9fulPnr_EFnNWaES5_QkjIpF0/edit

Comment 68 errata-xmlrpc 2024-06-13 14:19:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

