This bug was initially created as a copy of Bug #2184991. I am copying this bug because:

Description of problem:
It would be nice if cephfs-top dumped its computed values to stdout in JSON format. The JSON should contain all the fields and corresponding values displayed on the cephfs-top screen for each client; currently there are 20 fields per client. When cephfs-top is run with this new option, the ncurses mode would be disabled. This would be really useful if some other module wants to access the values exported by cephfs-top, especially for writing QA tests.
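For context, the kind of consumer the last sentence has in mind could be as small as the Python sketch below. It assumes the top-level "client_count" and "filesystems" keys and the per-client field names that appear in the sample output later in this bug, and uses the default "fstop" client mentioned in the cephfs-top help text; it is an illustration, not the actual QA test.

import json
import subprocess

# Sketch only: run the proposed dump mode and parse the JSON that
# cephfs-top prints to stdout (ncurses is disabled in this mode).
result = subprocess.run(
    ["cephfs-top", "--id", "fstop", "--dump"],
    capture_output=True, text=True, check=True,
)
metrics = json.loads(result.stdout)

# A QA test could assert on the aggregate counters ...
assert metrics["client_count"]["total_clients"] >= 0

# ... and walk the per-client metrics for every filesystem.
for fs_name, clients in metrics["filesystems"].items():
    for client_id, fields in clients.items():
        print(fs_name, client_id, fields["chit"], fields["dlease"])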
Hi All,

[root@ceph-amk-weekly-wzejtc-node8 ~]# dnf install cephfs-top
Updating Subscription Management repositories.
Last metadata expiration check: 0:16:09 ago on Tue Apr 9 00:40:22 2024.
Dependencies resolved.
================================================================================
 Package        Architecture   Version               Repository    Size
================================================================================
Installing:
 cephfs-top     noarch         2:18.2.0-188.el9cp    ceph-Tools    73 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 73 k
Installed size: 55 k
Is this ok [y/N]: y
Downloading Packages:
cephfs-top-18.2.0-188.el9cp.noarch.rpm                871 kB/s |  73 kB  00:00
--------------------------------------------------------------------------------
Total                                                 860 kB/s |  73 kB  00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing         :                                                      1/1
  Installing        : cephfs-top-2:18.2.0-188.el9cp.noarch                 1/1
  Running scriptlet : cephfs-top-2:18.2.0-188.el9cp.noarch                 1/1
  Verifying         : cephfs-top-2:18.2.0-188.el9cp.noarch                 1/1
Installed products updated.

Installed:
  cephfs-top-2:18.2.0-188.el9cp.noarch

Complete!
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph mgr module enable stats
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top

[1]+  Stopped                 cephfs-top

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dump
{"date": "Tue Apr 9 00:57:48 2024", "client_count": {"total_clients": 5, "fuse": 2, "kclient": 1, "libcephfs": 2}, "filesystems": {"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 134432820.51, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}}

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]
cephfs-top: error: argument --dumpfs: expected one argument

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs
{"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 121222658.96, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --help
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]

Ceph Filesystem top utility

optional arguments:
  -h, --help            show this help message and exit
  --cluster [CLUSTER]   Ceph cluster to connect (default: ceph)
  --id [ID]             Ceph user to use to connection (default: fstop)
  --conffile [CONFFILE]
                        Path to cluster configuration file
  --selftest            Run in selftest mode
  -d DELAY, --delay DELAY
                        Refresh interval in seconds (default: 1, range: 1 - 25)
  --dump                Dump the metrics to stdout
  --dumpfs DUMPFS       Dump the metrics of the given fs to stdout

[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available

Do we need to add any more tests to this?

Regards,
Amarnath
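As an illustration of the sort of automated check these outputs lend themselves to, here is a minimal Python sketch (not part of the verification run above). The per-client field names and the "fstop" client come from the sample output and help text above; the dump() helper is only a hypothetical wrapper.

import json
import subprocess

# Field names as they appear in the sample --dump/--dumpfs output above.
EXPECTED_FIELDS = {
    "mount_root", "chit", "dlease", "ofiles", "oicaps", "oinodes",
    "rtio", "raio", "rsp", "wtio", "waio", "wsp",
    "rlatavg", "rlatsd", "wlatavg", "wlatsd", "mlatavg", "mlatsd",
    "mount_point@host/addr",
}

def dump(extra_args):
    # Illustrative helper: run cephfs-top with the given dump flags and
    # return the JSON document it writes to stdout.
    out = subprocess.run(
        ["cephfs-top", "--id", "fstop"] + extra_args,
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

# Only request per-filesystem dumps for filesystems that actually exist,
# so the "Filesystem ... not available" error never comes up.
full = dump(["--dump"])
for fs_name in full["filesystems"]:
    per_fs = dump(["--dumpfs", fs_name])
    for client_id, fields in per_fs[fs_name].items():
        # Each client entry should expose the same fields that the
        # cephfs-top screen shows for it.
        assert EXPECTED_FIELDS <= set(fields), (fs_name, client_id)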
What do you mean by 'add any more tests to this'? The above outputs are correct. From the `cephfs-top --dump` output above, it's clear that you don't have the filesystem 'cephfs_1'.
Hi Jos,

Could you please add the doc text for this BZ?

Regards,
Amarnath
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743