Bug 1946520 - [CephFS]when you have 2 mounts on same clients, cephfs-top is not displaying the details properly
Keywords:
Status: CLOSED DUPLICATE of bug 1946516
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Jos Collin
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-06 10:32 UTC by Amarnath
Modified: 2021-05-05 05:11 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-05 05:11:01 UTC
Embargoed:


Attachments
All details are shown as N/A (80.21 KB, image/png)
2021-04-06 10:32 UTC, Amarnath

Description Amarnath 2021-04-06 10:32:03 UTC
Created attachment 1769527 [details]
All details are shown as N/A

Description of problem:
When you have two mounts on the same client, cephfs-top does not display the details properly.

cephfs-top - Tue Apr  6 05:54:25 2021
Client(s): 2 - 1 FUSE, 0 kclient, 1 libcephfs

    CLIENT_ID  MOUNT_ROOT  CAP_HIT(%)  READ_LATENCY(s)  WRITE_LATENCY(s)  METADATA_LATENCY(s)  DENTRY_LEASE(%)  OPENED_FILES  PINNED_ICAPS  OPENED_INODES  MOUNT_POINT@HOST/ADDR
    34206      /           93.46       0.0              0.0               0.02                 0.0              0             4             0              /mnt/fuse_mount@ceph-amk-1617277523462-node7-client/10.0.20
    34489      /           N/A         N/A              N/A               N/A                  N/A              N/A           N/A           N/A            N/A@ceph-amk-1617277523462-node7-client/v1:10.0.208.72




Version-Release number of selected component (if applicable):
ceph version 16.1.0-1323.el8cp


How reproducible:
Create a FUSE mount and a kernel mount on the same client.
The cephfs-top command lists the kernel client as a libcephfs client.
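The reproduction above can be sketched as a short shell session (the mount points are hypothetical, and the client is assumed to have /etc/ceph/ceph.conf and an admin keyring in place):

```shell
# Hypothetical mount points; assumes the client can reach the cluster
# and has valid admin credentials.
mkdir -p /mnt/fuse_mount /mnt/kernel_mount

# FUSE mount of the CephFS root
ceph-fuse /mnt/fuse_mount

# Kernel mount of the same filesystem on the same client
mount -t ceph :/ /mnt/kernel_mount -o name=admin

# Per this report, cephfs-top then shows the kernel mount as a
# libcephfs client, with N/A in every column for that row.
cephfs-top
```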



Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 RHEL Program Management 2021-04-06 10:32:10 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Amarnath 2021-05-04 14:37:26 UTC
Hi Jos,

I am facing a problem getting `ceph fs perf stats` output with the latest kernel: 4.18.0-305.el8.x86_64

[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph fs status
cephfs - 0 clients
======
+------+--------+-----------------------------------------+---------------+-------+-------+
| Rank | State  |                   MDS                   |    Activity   |  dns  |  inos |
+------+--------+-----------------------------------------+---------------+-------+-------+
|  0   | active | ceph-amar-42-84-1620129830591-node3-mds | Reqs:    0 /s |   10  |   13  |
+------+--------+-----------------------------------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k |  109G |
|   cephfs_data   |   data   |    0  |  109G |
+-----------------+----------+-------+-------+
+------------------------------------------+
|               Standby MDS                |
+------------------------------------------+
| ceph-amar-42-84-1620129830591-node12-mds |
| ceph-amar-42-84-1620129830591-node4-mds  |
| ceph-amar-42-84-1620129830591-node6-mds  |
+------------------------------------------+
MDS version: ceph version 14.2.11-154.el8cp (214891cf4af753adb7301d7180650d79dd6d7550) nautilus (stable)
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph mgr module enable stats
Error ENOENT: all mgr daemons do not support module 'stats', pass --force to force enablement
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph mgr module enable stats --force
module 'stats' is already enabled
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ sudo ceph fs perf stats
no valid command found; 10 closest matches:
fs status {<fs>}
fs volume ls
fs volume create <name>
fs volume rm <vol_name> {<yes-i-really-mean-it>}
fs subvolumegroup ls <vol_name>
fs subvolumegroup create <vol_name> <group_name> {<pool_layout>} {<int>} {<int>} {<mode>}
fs subvolumegroup rm <vol_name> <group_name> {--force}
fs subvolume ls <vol_name> {<group_name>}
fs subvolume create <vol_name> <sub_name> {<int>} {<group_name>} {<pool_layout>} {<int>} {<int>} {<mode>} {--namespace-isolated}
fs subvolume rm <vol_name> <sub_name> {<group_name>} {--force} {--retain-snapshots}
Error EINVAL: invalid command
[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ uname -a
Linux ceph-amar-42-84-1620129830591-node1-mon-mgr-installer 4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux

Comment 4 Amarnath 2021-05-04 14:43:33 UTC
Please find the OS version used below:

[cephuser@ceph-amar-42-84-1620129830591-node1-mon-mgr-installer ~]$ cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.4:GA"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.4"

Comment 5 Jeff Layton 2021-05-04 15:41:20 UTC
None of the stats patches have been merged into the RHEL kernels yet. They are still a WIP upstream at this point.

Comment 6 Jos Collin 2021-05-05 05:11:01 UTC
This is a duplicate of BZ 1946516.

*** This bug has been marked as a duplicate of bug 1946516 ***

