Bug 2225891 - CephFS down flag is not working [NEEDINFO]
Summary: CephFS down flag is not working
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.0
Assignee: Patrick Donnelly
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-25 18:00 UTC by Amarnath
Modified: 2023-08-16 15:38 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
pdonnell: needinfo? (amk)


Attachments: none


Links
Red Hat Issue Tracker RHCEPH-7073 (Last Updated: 2023-07-25 18:01:37 UTC)

Description Amarnath 2023-07-25 18:00:29 UTC
Description of problem:
The CephFS down flag is not working: even after setting the flag back to false, the filesystem does not come back up.

Test steps followed:
1. Created a filesystem.
2. Mounted the filesystem and wrote data into it.
3. Set the down flag to true, which took the FS down:
    ceph fs set cephfs-down-flag down true
4. Unset the flag (see the sketch below):
    ceph fs set cephfs-down-flag down false
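
For reference, a minimal sketch of steps 3-4 with verification added (the filesystem name is the one from this run; the status/joinable checks are an addition, not part of the original test):

    # Take the filesystem down; all ranks are failed and joinable is set to false.
    ceph fs set cephfs-down-flag down true
    ceph fs status cephfs-down-flag

    # Bring it back; standby MDS daemons are expected to pick the ranks up again.
    ceph fs set cephfs-down-flag down false
    ceph fs status cephfs-down-flag

    # The joinable flag should read true again once down is unset.
    ceph fs get cephfs-down-flag | grep joinable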

Test run Logs : http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-907CTT/

MDS logs : http://magna002.ceph.redhat.com/ceph-qe-logs/amar/ceph-mds.cephfs-down-flag.ceph-amk-recovery-6-1-jiiduj-node2.whnvfw.log

Output of the `ceph fs status` command, run in parallel during the test:
http://magna002.ceph.redhat.com/ceph-qe-logs/amar/mds_logs.txt
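
For context, a parallel capture like the one above can be produced with a simple loop (the interval and output path here are illustrative):

    # Snapshot `ceph fs status` with a timestamp every 5 seconds.
    while true; do
        date
        ceph fs status cephfs-down-flag
        sleep 5
    done >> /tmp/fs_status.log 2>&1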


[root@ceph-amk-recovery-6-1-jiiduj-node8 cephfs_kerneljw67otoull_1]# ceph versions
{
    "mon": {
        "ceph version 17.2.6-99.el9cp (6869830013a8878a3930e23c75d8b990f6b0c491) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6-99.el9cp (6869830013a8878a3930e23c75d8b990f6b0c491) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.6-99.el9cp (6869830013a8878a3930e23c75d8b990f6b0c491) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6-99.el9cp (6869830013a8878a3930e23c75d8b990f6b0c491) quincy (stable)": 7
    },
    "overall": {
        "ceph version 17.2.6-99.el9cp (6869830013a8878a3930e23c75d8b990f6b0c491) quincy (stable)": 24
    }
}

 



Comment 1 Patrick Donnelly 2023-07-25 19:52:10 UTC
Please attach the mon logs when running these commands. Please also include `ceph fs dump` before and after these commands are run.
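
(For reference, one way to capture what is requested; the file names and mon daemon name below are illustrative:)

    # fs map before and after toggling the flag:
    ceph fs dump > fs_dump_before.txt
    ceph fs set cephfs-down-flag down true
    ceph fs set cephfs-down-flag down false
    ceph fs dump > fs_dump_after.txt

    # mon log on a cephadm deployment (substitute the real mon daemon name):
    cephadm logs --name mon.<hostname>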

Comment 3 Patrick Donnelly 2023-08-14 22:24:19 UTC
The rank became damaged, but the MDS log you attached is from a different time period, so I cannot see what happened. Can you reproduce with:

debug_mds = 20
debug_ms = 1

and attach the MDS logs, please?
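
(For reference, the equivalent settings can be applied at runtime, targeting all MDS daemons; a sketch using the standard config commands:)

    ceph config set mds debug_mds 20
    ceph config set mds debug_ms 1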

