Bug 2164077 - Error message is not descriptive for ceph tell command
Summary: Error message is not descriptive for ceph tell command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 6.1z9
Assignee: Neeraj Pratap Singh
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks: 2356538 2356539
 
Reported: 2023-01-24 17:28 UTC by Amarnath
Modified: 2025-04-28 05:29 UTC
CC List: 9 users

Fixed In Version: ceph-17.2.6-265.el9cp
Doc Type: Bug Fix
Doc Text:
.The ceph tell command now displays a proper error message for an incorrect MDS type
Previously, the ceph tell command did not display a proper error message if the MDS type was incorrect. As a result, the command failed without a descriptive error message, and it was difficult to understand what was wrong with the command. With this fix, the ceph tell command returns an appropriate error message, stating "unknown <type_name>", when an incorrect MDS type is used.
Clone Of:
Cloned to: 2356538 2356539
Environment:
Last Closed: 2025-04-28 05:29:07 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 59624 0 None None None 2023-05-03 11:38:01 UTC
Red Hat Issue Tracker RHCEPH-6022 0 None None None 2023-01-24 17:28:54 UTC
Red Hat Product Errata RHSA-2025:4238 0 None None None 2025-04-28 05:29:21 UTC

Description Amarnath 2023-01-24 17:28:40 UTC
Description of problem:
Error message is not descriptive for the ceph tell command.
Running ceph tell with only <mds_name>, without the mds. prefix, gives the error below:

[root@magna028 ~]# ceph tell cephfs_1.magna023.yvyxvl client ls
error handling command target: local variable 'poolid' referenced before assignment

The error message `local variable 'poolid' referenced before assignment` is not descriptive; it points to an unhandled code path in the command-target handling rather than telling the user what is wrong with the command.
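
The error text is Python's UnboundLocalError message, which suggests that the CLI's command-target handling skipped the branch that assigns poolid when the target type was not recognized. The sketch below is purely illustrative and is not the actual Ceph source; the names KNOWN_TARGET_TYPES, resolve_target_buggy and resolve_target are invented for the example. It shows how an unvalidated tell target type can surface this kind of error, and how validating the type up front produces the descriptive "unknown <type_name>" message that the fix introduces.

# Illustrative sketch only -- not the actual Ceph CLI source. The names
# KNOWN_TARGET_TYPES, resolve_target_buggy() and resolve_target() are
# invented for this example.

KNOWN_TARGET_TYPES = {'mon', 'mgr', 'mds', 'osd'}  # assumed set of tell target types


def resolve_target_buggy(target: str):
    """Mimics the failure mode: an unrecognized type skips every branch that
    assigns poolid, so the return statement raises UnboundLocalError, which
    Python 3.9 reports as: local variable 'poolid' referenced before assignment."""
    target_type, _, target_id = target.partition('.')
    if target_type == 'osd':
        poolid = 0  # placeholder for a real pool lookup
    elif target_type == 'mds':
        poolid = 1  # placeholder for a real pool lookup
    # No fallback branch: a target like 'cephfs_1.magna023.yvyxvl' falls through.
    return target_type, target_id, poolid


def resolve_target(target: str):
    """Validates the type up front, so the failure is descriptive instead."""
    target_type, _, target_id = target.partition('.')
    if target_type not in KNOWN_TARGET_TYPES:
        raise ValueError("unknown type " + target_type)
    return target_type, target_id


if __name__ == '__main__':
    bad_target = 'cephfs_1.magna023.yvyxvl'  # MDS name without the 'mds.' prefix
    try:
        resolve_target_buggy(bad_target)
    except UnboundLocalError as err:
        print(err)  # -> local variable 'poolid' referenced before assignment
    try:
        resolve_target(bad_target)
    except ValueError as err:
        print(err)  # -> unknown type cephfs_1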

The same command with the mds.<mds_name> prefix works fine:
[root@magna028 ~]# ceph tell mds.cephfs_1.magna023.yvyxvl client ls | grep "id"
2023-01-24T17:21:42.239+0000 7fcce27f4640  0 client.134465 ms_handle_reset on v2:10.8.128.23:6824/376621306
2023-01-24T17:21:42.381+0000 7fcce27f4640  0 client.78973 ms_handle_reset on v2:10.8.128.23:6824/376621306
        "id": 115809,
            "entity_id": "admin",


[root@magna028 ~]# ceph fs status
cephfs - 6 clients
======
RANK  STATE            MDS              ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs.argo017.kwzitt  Reqs:   10 /s   153k   153k  1666   1072   
 1    active  cephfs.argo021.aoxhrx  Reqs: 1399 /s  50.4k  50.3k   605   1171   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  3528M  51.9T  
cephfs.cephfs.data    data    15.2T  51.9T  
cephfs_1 - 1 clients
========
RANK  STATE             MDS                ACTIVITY     DNS    INOS   DIRS   CAPS  
 0    active  cephfs_1.magna023.yvyxvl  Reqs:   26 /s   158    159     67    147   
        POOL            TYPE     USED  AVAIL  
cephfs.cephfs_1.meta  metadata  98.8M  51.9T  
cephfs.cephfs_1.data    data    36.0k  51.9T  
      STANDBY MDS        
 cephfs.argo013.lovief   
cephfs_1.argo021.rvojoi  
 cephfs.argo020.nuhgsb   
MDS version: ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)


Version-Release number of selected component (if applicable):
[root@magna028 ~]# ceph versions
{
    "mon": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 5
    },
    "mgr": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 3
    },
    "osd": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 100
    },
    "mds": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 6
    },
    "overall": {
        "ceph version 17.2.5-58.el9cp (16442de8e6d5a0e3579858f2df1407d21043bc4b) quincy (stable)": 114
    }
}


How reproducible:
Always

Steps to Reproduce:
1. Deploy a Ceph File System with an active MDS (for example, cephfs_1 above).
2. Run the ceph tell command addressing the MDS by name only, omitting the mds. prefix, e.g. ceph tell cephfs_1.magna023.yvyxvl client ls
3. Observe the error returned by the command.

Actual results:
error handling command target: local variable 'poolid' referenced before assignment

Expected results:
A descriptive error message indicating that the target type is unknown or that the mds. prefix is required.

Additional info:

Comment 2 Venky Shankar 2023-03-28 06:59:05 UTC
Neeraj, please create a redmine ticket upstream and link it here.

Comment 20 errata-xmlrpc 2025-04-28 05:29:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 6.1 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:4238

