Bug 1946520
| Summary: | [CephFS] When you have 2 mounts on the same client, cephfs-top is not displaying the details properly | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Amarnath <amk> |
| Component: | CephFS | Assignee: | Jos Collin <jcollin> |
| Status: | CLOSED DUPLICATE | QA Contact: | Hemanth Kumar <hyelloji> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 5.0 | CC: | ceph-eng-bugs, jcollin, jlayton, sweil, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-05 05:11:01 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Hi Jos,
I am facing a problem getting ceph fs perf stats to work with the latest kernel: 4.18.0-305.el8.x86_64
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph fs status
cephfs - 0 clients
======
+------+--------+-----------------------------------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+-----------------------------------------+---------------+-------+-------+
| 0 | active | ceph-amar-42-84-1620129830591-node3-mds | Reqs: 0 /s | 10 | 13 |
+------+--------+-----------------------------------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 109G |
| cephfs_data | data | 0 | 109G |
+-----------------+----------+-------+-------+
+------------------------------------------+
| Standby MDS |
+------------------------------------------+
| ceph-amar-42-84-1620129830591-node12-mds |
| ceph-amar-42-84-1620129830591-node4-mds |
| ceph-amar-42-84-1620129830591-node6-mds |
+------------------------------------------+
MDS version: ceph version 14.2.11-154.el8cp (214891cf4af753adb7301d7180650d79dd6d7550) nautilus (stable)
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph mgr module enable stats
Error ENOENT: all mgr daemons do not support module 'stats', pass --force to force enablement
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph mgr module enable stats --force
module 'stats' is already enabled
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph fs perf stats
no valid command found; 10 closest matches:
fs status {<fs>}
fs volume ls
fs volume create <name>
fs volume rm <vol_name> {<yes-i-really-mean-it>}
fs subvolumegroup ls <vol_name>
fs subvolumegroup create <vol_name> <group_name> {<pool_layout>} {<int>} {<int>} {<mode>}
fs subvolumegroup rm <vol_name> <group_name> {--force}
fs subvolume ls <vol_name> {<group_name>}
fs subvolume create <vol_name> <sub_name> {<int>} {<group_name>} {<pool_layout>} {<int>} {<int>} {<mode>} {--namespace-isolated}
fs subvolume rm <vol_name> <sub_name> {<group_name>} {--force} {--retain-snapshots}
Error EINVAL: invalid command
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ uname -a
Linux ceph-amar-42-84-1620129830591-node1-mon-mgr-installer 4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
Please find the OS version used:
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.4:GA"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.4"

None of the stats patches have been merged into the RHEL kernels yet. They are still a WIP upstream at this point. This is a duplicate of BZ 1946516.

*** This bug has been marked as a duplicate of bug 1946516 ***
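For reference, the workflow the test above was attempting looks roughly like the following. This is only a sketch, not output from the affected cluster, and it assumes a mgr that actually ships the stats module (Pacific / RHCS 5) and a client that reports metrics:

# Sketch of the intended perf stats workflow (assumes stats module is shipped by the mgr)
sudo ceph mgr module ls | grep stats    # confirm the module is available before enabling it
sudo ceph mgr module enable stats       # no --force needed when all mgr daemons support it
sudo ceph fs perf stats                 # JSON dump of per-client performance counters
cephfs-top                              # curses view built on top of the same stats module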
Created attachment 1769527 [details]
All details are shown as N/A

Description of problem:
When you have 2 mounts on the same client, cephfs-top is not displaying the details properly.

cephfs-top - Tue Apr 6 05:54:25 2021
Client(s): 2 - 1 FUSE, 0 kclient, 1 libcephfs

CLIENT_ID  MOUNT_ROOT  CAP_HIT(%)  READ_LATENCY(s)  WRITE_LATENCY(s)  METADATA_LATENCY(s)  DENTRY_LEASE(%)  OPENED_FILES  PINNED_ICAPS  OPENED_INODES  MOUNT_POINT@HOST/ADDR
34206      /           93.46       0.0              0.0               0.02                 0.0              0             4             0              /mnt/fuse_mount@ceph-amk-1617277523462-node7-client/10.0.20
34489      /           N/A         N/A              N/A               N/A                  N/A              N/A           N/A           N/A            N/A@ceph-amk-1617277523462-node7-client/v1:10.0.208.72

Version-Release number of selected component (if applicable):
ceph version 16.1.0-1323.el8cp

How reproducible:
FUSE and kernel mount on the same client; the cephfs-top command lists the kernel client as a libcephfs client.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
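A minimal reproduction sketch, assuming the client host already has a valid ceph.conf and keyring; the mount paths and MON address below are placeholders, not values from the cluster above:

# Reproduction sketch (placeholder paths and addresses)
sudo ceph-fuse /mnt/fuse_mount                                         # first mount via ceph-fuse
sudo mount -t ceph <mon-host>:6789:/ /mnt/kernel_mount -o name=admin   # second mount via the kernel client
cephfs-top                                                             # both clients appear, but the kernel one is reported as libcephfs and all its fields show N/A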