Bug 2272458 - cephfs-top: include the missing fields in --dump output
Summary: cephfs-top: include the missing fields in --dump output
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 7.0
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 7.0z2
Assignee: Jos Collin
QA Contact: Amarnath
Docs Contact: Disha Walvekar
URL:
Whiteboard:
Depends On:
Blocks: 2270485
 
Reported: 2024-04-01 11:20 UTC by Jos Collin
Modified: 2024-05-09 17:09 UTC
CC List: 8 users

Fixed In Version: ceph-18.2.0-186.el9cp
Doc Type: Bug Fix
Doc Text:
Previously, the `date`, `client_count`, and `filters` fields were missing from the `cephfs-top --dump` output. With this release, these missing fields are included in the `cephfs-top --dump` output.
Clone Of:
Environment:
Last Closed: 2024-05-07 12:11:34 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-8704 0 None None None 2024-04-01 11:27:20 UTC
Red Hat Product Errata RHBA-2024:2743 0 None None None 2024-05-07 12:11:39 UTC

Description Jos Collin 2024-04-01 11:20:06 UTC
This bug was initially created as a copy of Bug #2184991

I am copying this bug because: 



Description of problem:
It would be nice if cephfs-top dumped its computed values to stdout in JSON format. The JSON should contain all the fields and corresponding values displayed on the cephfs-top screen for each client; there are currently 20 fields per client. When cephfs-top is run with this new option, the ncurses mode would be disabled. This would be really useful if some other module wants to access the values exported from cephfs-top, especially for writing QA tests.
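
For instance, a QA test could invoke the dump mode and parse the JSON directly. The following is only a minimal sketch, assuming cephfs-top is installed, the mgr 'stats' module is enabled, and the default client.fstop credentials exist; the key names follow the dump output shown later in this bug:

import json
import subprocess

def get_fstop_dump():
    # --dump disables the ncurses UI and writes a single JSON document to stdout
    out = subprocess.run(["cephfs-top", "--dump"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

metrics = get_fstop_dump()
print(metrics["date"], metrics["client_count"]["total_clients"])
for fs_name, clients in metrics["filesystems"].items():
    for client_id, fields in clients.items():
        print(fs_name, client_id, fields["chit"], fields["mlatavg"])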

Comment 5 Amarnath 2024-04-10 02:16:07 UTC
Hi All,

[root@ceph-amk-weekly-wzejtc-node8 ~]# dnf install cephfs-top
Updating Subscription Management repositories.
Last metadata expiration check: 0:16:09 ago on Tue Apr  9 00:40:22 2024.
Dependencies resolved.
======================================================================================================================================================================================================================================
 Package                                               Architecture                                      Version                                                          Repository                                             Size
======================================================================================================================================================================================================================================
Installing:
 cephfs-top                                            noarch                                            2:18.2.0-188.el9cp                                               ceph-Tools                                             73 k

Transaction Summary
======================================================================================================================================================================================================================================
Install  1 Package

Total download size: 73 k
Installed size: 55 k
Is this ok [y/N]: y
Downloading Packages:
cephfs-top-18.2.0-188.el9cp.noarch.rpm                                                                                                                                                                871 kB/s |  73 kB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                 860 kB/s |  73 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                              1/1 
  Installing       : cephfs-top-2:18.2.0-188.el9cp.noarch                                                                                                                                                                         1/1 
  Running scriptlet: cephfs-top-2:18.2.0-188.el9cp.noarch                                                                                                                                                                         1/1 
  Verifying        : cephfs-top-2:18.2.0-188.el9cp.noarch                                                                                                                                                                         1/1 
Installed products updated.

Installed:
  cephfs-top-2:18.2.0-188.el9cp.noarch                                                                                                                                                                                                

Complete!
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph mgr module enable stats
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top

[1]+  Stopped                 cephfs-top
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dump
{"date": "Tue Apr  9 00:57:48 2024", "client_count": {"total_clients": 5, "fuse": 2, "kclient": 1, "libcephfs": 2}, "filesystems": {"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 134432820.51, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}}
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs 
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]
cephfs-top: error: argument --dumpfs: expected one argument
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs
{"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 121222658.96, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --help
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]

Ceph Filesystem top utility

optional arguments:
  -h, --help            show this help message and exit	
  --cluster [CLUSTER]   Ceph cluster to connect (default: ceph)
  --id [ID]             Ceph user to use to connection (default: fstop)
  --conffile [CONFFILE]
                        Path to cluster configuration file
  --selftest            Run in selftest mode
  -d DELAY, --delay DELAY
                        Refresh interval in seconds (default: 1, range: 1 - 25)
  --dump                Dump the metrics to stdout
  --dumpfs DUMPFS       Dump the metrics of the given fs to stdout
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available
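
For reference, the presence of the fields this fix adds could also be asserted programmatically. This is a sketch only, assuming the same cluster setup and client.fstop keyring created above:

import json
import subprocess

# Hypothetical check that the previously missing top-level fields
# (date, client_count) appear alongside the per-filesystem metrics.
dump = json.loads(subprocess.run(["cephfs-top", "--dump"],
                                 capture_output=True, text=True,
                                 check=True).stdout)
for key in ("date", "client_count", "filesystems"):
    assert key in dump, f"missing field: {key}"
assert dump["client_count"]["total_clients"] >= 1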

Do we need to add any more tests to this?

Regards,
Amarnath

Comment 6 Jos Collin 2024-04-10 04:06:05 UTC
What do you mean by 'add any more tests to this'? 
The above outputs are correct. From the `cephfs-top --dump` output above, it's clear that you don't have the filesystem 'cephfs_1'.

Comment 7 Amarnath 2024-04-16 05:40:53 UTC
Hi Jos,

Could you please add the doc text for this BZ?

Regards,
Amarnath

Comment 10 errata-xmlrpc 2024-05-07 12:11:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743

