Bug 2272458
| Summary: | cephfs-top: include the missing fields in --dump output | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Jos Collin <jcollin> |
| Component: | CephFS | Assignee: | Jos Collin <jcollin> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | Disha Walvekar <dwalveka> |
| Priority: | medium | | |
| Version: | 7.0 | CC: | ceph-eng-bugs, cephqe-warriors, dwalveka, hyelloji, rpollack, tserlin, vereddy, vshankar |
| Target Milestone: | --- | | |
| Target Release: | 7.0z2 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-18.2.0-186.el9cp | Doc Type: | Bug Fix |
| Doc Text: | Previously, the `date`, `client_count`, and `filters` fields were missing from the `cephfs-top --dump` output. With this release, these fields are included in the `cephfs-top --dump` output. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2024-05-07 12:11:34 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2270485 | | |
Description
Jos Collin 2024-04-01 11:20:06 UTC
Comment from Amarnath <amk>:

Hi All,
[root@ceph-amk-weekly-wzejtc-node8 ~]# dnf install cephfs-top
Updating Subscription Management repositories.
Last metadata expiration check: 0:16:09 ago on Tue Apr 9 00:40:22 2024.
Dependencies resolved.
======================================================================================================================================================================================================================================
Package Architecture Version Repository Size
======================================================================================================================================================================================================================================
Installing:
cephfs-top noarch 2:18.2.0-188.el9cp ceph-Tools 73 k
Transaction Summary
======================================================================================================================================================================================================================================
Install 1 Package
Total download size: 73 k
Installed size: 55 k
Is this ok [y/N]: y
Downloading Packages:
cephfs-top-18.2.0-188.el9cp.noarch.rpm 871 kB/s | 73 kB 00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 860 kB/s | 73 kB 00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : cephfs-top-2:18.2.0-188.el9cp.noarch 1/1
Running scriptlet: cephfs-top-2:18.2.0-188.el9cp.noarch 1/1
Verifying : cephfs-top-2:18.2.0-188.el9cp.noarch 1/1
Installed products updated.
Installed:
cephfs-top-2:18.2.0-188.el9cp.noarch
Complete!
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph mgr module enable stats
[root@ceph-amk-weekly-wzejtc-node8 ~]# ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top
[1]+ Stopped cephfs-top
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dump
{"date": "Tue Apr 9 00:57:48 2024", "client_count": {"total_clients": 5, "fuse": 2, "kclient": 1, "libcephfs": 2}, "filesystems": {"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 134432820.51, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}}
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]
cephfs-top: error: argument --dumpfs: expected one argument
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs
{"cephfs": {"15180": {"mount_root": "/", "chit": 92.31, "dlease": 50.0, "ofiles": "0", "oicaps": "4", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 3.7, "mlatsd": 6.15, "mount_point@host/addr": "/mnt/fuse@ceph-amk-weekly-wzejtc-node8/10.0.208.110"}, "15225": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 1.32, "mlatsd": 1.46, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}, "24403": {"mount_root": "/", "chit": 99.47, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 5000.0, "waio": 4.0, "wsp": 121222658.96, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 301.01, "wlatsd": 87.35, "mlatavg": 1.84, "mlatsd": 4.32, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node8/v1:10.0.208.110"}, "24427": {"mount_root": "/", "chit": 71.43, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 0.88, "mlatsd": 1.54, "mount_point@host/addr": "/mnt/cephfs_fusenlx2l@ceph-amk-weekly-wzejtc-node9/10.0.211.141"}, "24793": {"mount_root": "/", "chit": 100.0, "dlease": 0.0, "ofiles": "0", "oicaps": "1", "oinodes": "0", "rtio": 0.0, "raio": 0.0, "rsp": 0.0, "wtio": 0.0, "waio": 0.0, "wsp": 0.0, "rlatavg": 0.0, "rlatsd": 0.0, "wlatavg": 0.0, "wlatsd": 0.0, "mlatavg": 2.16, "mlatsd": 1.68, "mount_point@host/addr": "N/A@ceph-amk-weekly-wzejtc-node1-installer/10.0.210.48"}}}
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --help
usage: cephfs-top [-h] [--cluster [CLUSTER]] [--id [ID]] [--conffile [CONFFILE]] [--selftest] [-d DELAY] [--dump] [--dumpfs DUMPFS]
Ceph Filesystem top utility
optional arguments:
-h, --help show this help message and exit
--cluster [CLUSTER] Ceph cluster to connect (default: ceph)
--id [ID] Ceph user to use to connection (default: fstop)
--conffile [CONFFILE]
Path to cluster configuration file
--selftest Run in selftest mode
-d DELAY, --delay DELAY
Refresh interval in seconds (default: 1, range: 1 - 25)
--dump Dump the metrics to stdout
--dumpfs DUMPFS Dump the metrics of the given fs to stdout
[root@ceph-amk-weekly-wzejtc-node8 ~]# cephfs-top --dumpfs cephfs_1
Filesystem cephfs_1 not available
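To avoid the "Filesystem ... not available" error when scripting, one could first enumerate the filesystems reported by `--dump` before requesting a per-fs dump; a sketch under the same assumptions:

```python
#!/usr/bin/env python3
# Sketch: list the filesystems known to `cephfs-top --dump` before
# calling `--dumpfs`, which avoids the "not available" error seen
# above for cephfs_1.
import json
import subprocess

dump = json.loads(
    subprocess.run(["cephfs-top", "--dump"],
                   check=True, capture_output=True, text=True).stdout)

available = list(dump["filesystems"])
print("available filesystems:", available)  # here: ['cephfs']
```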
Do we need to add any more tests to this?
Regards,
Amarnath
Comment from Jos Collin <jcollin>:

What do you mean by 'add any more tests to this'? The above outputs are correct. From the `cephfs-top --dump` output above, it's clear that you don't have the filesystem 'cephfs_1'.

Comment from Amarnath <amk>:

Hi Jos,

Could you please add the doc text for the BZ?

Regards,
Amarnath

Closing comment:

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2743