Bug 2164077 - Error message is not descriptive for ceph tell command [NEEDINFO]
Summary: Error message is not descriptive for ceph tell command
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 6.1z2
Assignee: Neeraj Pratap Singh
QA Contact: julpark
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-24 17:28 UTC by Amarnath
Modified: 2023-07-10 09:04 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
gfarnum: needinfo? (neesingh)




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 59624 0 None None None 2023-05-03 11:38:01 UTC
Red Hat Issue Tracker RHCEPH-6022 0 None None None 2023-01-24 17:28:54 UTC

Description Amarnath 2023-01-24 17:28:40 UTC
Description of problem:
Error message is not descriptive for ceph tell command
Running the ceph tell command with only <mds_name> (without the mds. prefix) gives the error below:

[root@magna028 ~]# ceph tell cephfs_1.magna023.yvyxvl client ls
error handling command target: local variable 'poolid' referenced before assignment

The error message `local variable 'poolid' referenced before assignment` is not descriptive; it looks like an error-handling path was missed in the code for invalid tell targets.
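The reported message is a raw Python UnboundLocalError leaking out to the user: a variable assigned only inside a conditional branch is referenced after the branch is skipped. The sketch below is illustrative only (the function and variable names are hypothetical, not the actual Ceph CLI code path); it shows the failure mode and the kind of descriptive error the command should raise instead.

```python
def resolve_tell_target(target):
    """Buggy sketch: 'poolid' is assigned only when the target spec
    matches a recognized daemon prefix, so an unprefixed name like
    'cephfs_1.magna023.yvyxvl' falls through and the final reference
    raises UnboundLocalError ('local variable referenced before
    assignment')."""
    if target.startswith("mds."):
        poolid = 0  # placeholder for a real daemon lookup
    # No else branch: 'poolid' may be unbound here.
    return poolid


def resolve_tell_target_fixed(target):
    """Fixed sketch: unrecognized targets get a descriptive error
    instead of an internal Python exception."""
    if target.startswith("mds."):
        poolid = 0  # placeholder for a real daemon lookup
    else:
        raise ValueError(
            "invalid tell target '%s': expected <type>.<id>, "
            "e.g. mds.cephfs_1.magna023.yvyxvl" % target)
    return poolid
```

With the fix, the same invalid input produces an actionable message naming the expected target format rather than exposing an internal variable name.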

The same command with mds.<mds_name> works fine:
[root@magna028 ~]# ceph tell mds.cephfs_1.magna023.yvyxvl client ls | grep "id"
2023-01-24T17:21:42.239+0000 7fcce27f4640  0 client.134465 ms_handle_reset on v2:10.8.128.23:6824/376621306
2023-01-24T17:21:42.381+0000 7fcce27f4640  0 client.78973 ms_handle_reset on v2:10.8.128.23:6824/376621306
        "id": 115809,
            "entity_id": "admin",


[root@magna028 ~]# ceph fs status
cephfs - 6 clients
======
RANK  STATE            MDS              ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.argo017.kwzitt  Reqs:   10 /s   153k   153k  1666   1072   
 1    active  cephfs.argo021.aoxhrx  Reqs: 1399 /s  50.4k  50.3k   605   1171   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  3528M  51.9T  
cephfs.cephfs.data    data    15.2T  51.9T  
cephfs_1 - 1 clients
========
RANK  STATE             MDS                ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs_1.magna023.yvyxvl  Reqs:   26 /s   158    159     67    147   
        POOL            TYPE     USED  AVAIL  
cephfs.cephfs_1.meta  metadata  98.8M  51.9T  
cephfs.cephfs_1.data    data    36.0k  51.9T  
      STANDBY MDS        
 cephfs.argo013.lovief   
cephfs_1.argo021.rvojoi  
 cephfs.argo020.nuhgsb   
MDS version: ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)


Version-Release number of selected component (if applicable):
[root@magna028 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 5
    },
    "mgr": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 3
    },
    "osd": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 100
    },
    "mds": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 6
    },
    "overall": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 114
    }
}


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Venky Shankar 2023-03-28 06:59:05 UTC
Neeraj, please create a redmine ticket upstream and link it here.

