Bug 2269686 - mds: add a command to dump directory information
Summary: mds: add a command to dump directory information
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 7.0z2
Assignee: Jos Collin
QA Contact: Amarnath
Docs Contact: Disha Walvekar
URL:
Whiteboard:
Depends On:
Blocks: 2270485
 
Reported: 2024-03-15 10:55 UTC by Jos Collin
Modified: 2024-05-07 12:09 UTC
CC List: 8 users

Fixed In Version: ceph-18.2.0-187.el9cp
Doc Type: Enhancement
Doc Text:
This enhancement introduces a 'dump dir' command that dumps directory information for a given path.
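Example usage (illustrative daemon name and path, matching the reproduction step below):
$ ceph daemon mds.<name> dump dir <path>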
Clone Of:
Environment:
Last Closed: 2024-05-07 12:09:25 UTC
Embargoed:




Links
System                    ID              Status  Summary                                           Last Updated
Ceph Project Bug Tracker  63093           None    None                                              2024-03-15 10:55:20 UTC
GitHub ceph/ceph          pull 49756      Merged  mds: add a command to dump directory information  2024-03-15 10:55:20 UTC
Red Hat Issue Tracker     RHCEPH-8539     None    None                                              2024-03-15 10:56:25 UTC
Red Hat Product Errata    RHBA-2024:2743  None    None                                              2024-05-07 12:09:29 UTC

Description Jos Collin 2024-03-15 10:55:21 UTC
Description of problem:
Add a 'dump dir' command to the MDS admin socket to dump directory information for a given path.

Version-Release number of selected component (if applicable):
6.0

How reproducible:
$ ceph daemon mds.a dump dir /test-dir


Expected results:
[
    {
        "value/bits": "0/0",
        "status": "dirfrag not in cache"
    }
]
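
A minimal end-to-end sketch (assuming the example daemon name mds.a used above and a client mount at /mnt/fuse as in Comment 6):

# Query a path whose directory fragment is not yet in the MDS cache
$ ceph daemon mds.a dump dir /test-dir
# -> [{ "value/bits": "0/0", "status": "dirfrag not in cache" }]

# Create the directory through a mounted client, then query again
$ mkdir /mnt/fuse/test-dir
$ ceph daemon mds.a dump dir /test-dir
# -> full dump of the cached dirfrag: path, dirfrag, versions, states, pins, nref (see Comment 6)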

Additional info:

Comment 1 Jos Collin 2024-04-05 11:11:49 UTC
This is not required for 6.1, changing the target to 7.0z2.

Comment 6 Amarnath 2024-04-10 02:14:42 UTC
Hi All,

Setup details : 

[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph -s
  cluster:
    id:     cab92b0e-f624-11ee-9418-fa163ecee7e5
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-amk-weekly-wzejtc-node1-installer,ceph-amk-weekly-wzejtc-node3,ceph-amk-weekly-wzejtc-node2 (age 46m)
    mgr: ceph-amk-weekly-wzejtc-node1-installer.mxsbyy(active, since 48m), standbys: ceph-amk-weekly-wzejtc-node2.kknief
    mds: 2/2 daemons up, 3 standby
    osd: 12 osds: 12 up (since 41m), 12 in (since 42m)
 
  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 103 objects, 8.3 MiB
    usage:   725 MiB used, 179 GiB / 180 GiB avail
    pgs:     49 active+clean
 
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)": 3
    },
    "mgr": {
        "ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)": 12
    },
    "mds": {
        "ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)": 5
    },
    "overall": {
        "ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)": 22
    }
}
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph fs status
cephfs - 2 clients
======
RANK  STATE                      MDS                         ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb  Reqs:    0 /s    86     21     20      2   
 1    active  cephfs.ceph-amk-weekly-wzejtc-node4.rcdikx  Reqs:    0 /s    27     20     18      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  24.0M  56.6G  
cephfs.cephfs.data    data       0   56.6G  
               STANDBY MDS                  
cephfs.ceph-amk-weekly-wzejtc-node5.ckfdfo  
cephfs.ceph-amk-weekly-wzejtc-node7.qnmjub  
cephfs.ceph-amk-weekly-wzejtc-node6.zpjhbt  
MDS version: ceph version 18.2.0-188.el9cp (639245962af16e2c342418f2318f86f7d1e34e24) reef (stable)


Command output:
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph orch host ls
HOST                                    ADDR          LABELS                    STATUS  
ceph-amk-weekly-wzejtc-node1-installer  10.0.210.48   _admin,mgr,mon,installer          
ceph-amk-weekly-wzejtc-node2            10.0.208.35   mgr,mon                           
ceph-amk-weekly-wzejtc-node3            10.0.208.162  mon,mds                           
ceph-amk-weekly-wzejtc-node4            10.0.208.179  mds,osd                           
ceph-amk-weekly-wzejtc-node5            10.0.208.161  mds,osd                           
ceph-amk-weekly-wzejtc-node6            10.0.210.4    nfs,mds,osd                       
ceph-amk-weekly-wzejtc-node7            10.0.209.11   nfs,mds      

[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph-fuse /mnt/fuse/
2024-04-09T00:51:35.795-0400 7f1822438480 -1 init, newargv = 0x7f1810004df0 newargc=15
ceph-fuse[9898]: starting ceph client
ceph-fuse[9898]: starting fuse
[root@ceph-amk-weekly-wzejtc-node8 ~]# mkdir /mnt/fuse/test_dir
[root@ceph-amk-weekly-wzejtc-node8 ~]# mkdir /mnt/fuse/test_dir/test
[root@ceph-amk-weekly-wzejtc-node8 ~]# 



[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# ceph daemon mds.cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb dump dir /
[
    {
        "path": "",
        "dirfrag": "0x1",
        "snapid_first": 2,
        "projected_version": "1920",
        "version": "1920",
        "committing_version": "0",
        "committed_version": "0",
        "is_rep": false,
        "dir_auth": "0",
        "states": [
            "auth",
            "dirty",
            "complete"
        ],
        "is_auth": true,
        "auth_state": {
            "replicas": {}
        },
        "replica_state": {
            "authority": [
                0,
                -2
            ],
            "replica_nonce": 0
        },
        "auth_pins": 0,
        "is_frozen": false,
        "is_freezing": false,
        "pins": {
            "child": 1,
            "subtree": 1,
            "subtreetemp": 0,
            "replicated": 0,
            "dirty": 1,
            "waiter": 0,
            "authpin": 0
        },
        "nref": 3
    }
]
[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# ceph daemon mds.cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb dump dir /test_dir
[
    {
        "path": "/test_dir",
        "dirfrag": "0x100000007e0",
        "snapid_first": 2,
        "projected_version": "1",
        "version": "1",
        "committing_version": "0",
        "committed_version": "0",
        "is_rep": false,
        "dir_auth": "",
        "states": [
            "auth",
            "dirty",
            "complete"
        ],
        "is_auth": true,
        "auth_state": {
            "replicas": {}
        },
        "replica_state": {
            "authority": [
                0,
                -2
            ],
            "replica_nonce": 0
        },
        "auth_pins": 0,
        "is_frozen": false,
        "is_freezing": false,
        "pins": {
            "dirty": 1
        },
        "nref": 1
    }
]
[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# ceph daemon mds.cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb dump dir /test_dir_1
[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# ceph daemon mds.cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb dump dir /test_dir
[
    {
        "path": "/test_dir",
        "dirfrag": "0x100000007e0",
        "snapid_first": 2,
        "projected_version": "3",
        "version": "3",
        "committing_version": "0",
        "committed_version": "0",
        "is_rep": false,
        "dir_auth": "",
        "states": [
            "auth",
            "dirty",
            "complete"
        ],
        "is_auth": true,
        "auth_state": {
            "replicas": {}
        },
        "replica_state": {
            "authority": [
                0,
                -2
            ],
            "replica_nonce": 0
        },
        "auth_pins": 0,
        "is_frozen": false,
        "is_freezing": false,
        "pins": {
            "child": 1,
            "dirty": 1,
            "authpin": 0
        },
        "nref": 2
    }
]
[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# ceph daemon mds.cephfs.ceph-amk-weekly-wzejtc-node3.zumhmb dump dir /test_dir/test
[
    {
        "path": "/test_dir/test",
        "dirfrag": "0x100000007e1",
        "snapid_first": 2,
        "projected_version": "1",
        "version": "1",
        "committing_version": "0",
        "committed_version": "0",
        "is_rep": false,
        "dir_auth": "",
        "states": [
            "auth",
            "dirty",
            "complete"
        ],
        "is_auth": true,
        "auth_state": {
            "replicas": {}
        },
        "replica_state": {
            "authority": [
                0,
                -2
            ],
            "replica_nonce": 0
        },
        "auth_pins": 0,
        "is_frozen": false,
        "is_freezing": false,
        "pins": {
            "dirty": 1
        },
        "nref": 1
    }
]
[ceph: root@ceph-amk-weekly-wzejtc-node3 /]# 

Do we need to verify anything apart from this?

Regards,
Amarnath

Comment 7 Jos Collin 2024-04-10 04:37:38 UTC
The outputs are correct.

Comment 8 Amarnath 2024-04-16 05:41:40 UTC
Hi Jos,

Could you please add the doc text for this BZ?

Regards,
Amarnath

Comment 12 errata-xmlrpc 2024-05-07 12:09:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743

