Description of problem:
"ceph tell mds.<id> **" commands fail with a RuntimeError exception. Commands related to OSD/MON work without any issues.

<pre>
[ubuntu@host028 ~]$ sudo ceph tell mds.1 injectargs "--debug-mds 20" --cluster qetest
Traceback (most recent call last):
  File "/bin/ceph", line 1121, in <module>
    retval = main()
  File "/bin/ceph", line 1041, in main
    prefix='get_command_descriptions')
  File "/usr/lib/python2.7/site-packages/ceph_argparse.py", line 1346, in json_command
    raise RuntimeError('"{0}": exception {1}'.format(argdict, e))
RuntimeError: "None": exception "['{"prefix": "get_command_descriptions"}']": exception [Errno 2] error calling conf_read_file

[ubuntu@host028 ~]$ sudo ceph tell mds.0 --cluster qetest client ls
Traceback (most recent call last):
  File "/bin/ceph", line 1121, in <module>
    retval = main()
  File "/bin/ceph", line 1041, in main
    prefix='get_command_descriptions')
  File "/usr/lib/python2.7/site-packages/ceph_argparse.py", line 1346, in json_command
    raise RuntimeError('"{0}": exception {1}'.format(argdict, e))
RuntimeError: "None": exception "['{"prefix": "get_command_descriptions"}']": exception [Errno 2] error calling conf_read_file
</pre>

Version-Release number of selected component (if applicable):
ceph: 12.2.0-1.el7cp (b661348f156f148d764b998b65b90451f096cb27) luminous (rc)

How reproducible:
10/10

Steps to Reproduce:
NA

Actual results:
NA

Expected results:
NA

Additional info:
NA
<pre>
$ sudo ceph tell mds.1 injectargs "--debug-mds 20" --cluster qetest
Traceback (most recent call last):
  File "/bin/ceph", line 1121, in <module>
    retval = main()
  File "/bin/ceph", line 1041, in main
    prefix='get_command_descriptions')
  File "/usr/lib/python2.7/site-packages/ceph_argparse.py", line 1346, in json_command
    raise RuntimeError('"{0}": exception {1}'.format(argdict, e))
RuntimeError: "None": exception "['{"prefix": "get_command_descriptions"}']": exception [Errno 2] error calling conf_read_file
</pre>

The "--cluster qetest" argument should go before "tell". You see this error when the ceph executable can't find the config file associated with that cluster name.
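For reference, a sketch of the reordered invocation (reusing the mds.1 id and qetest cluster name from the report; the global --cluster option has to come before the tell subcommand):

<pre>
# Global options such as --cluster must precede the subcommand
$ sudo ceph --cluster qetest tell mds.1 injectargs "--debug-mds 20"
</pre>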
Yeah, that's a bug. Thanks! http://tracker.ceph.com/issues/21406
As Patrick says in http://tracker.ceph.com/issues/21406, it's only the --cluster argument that `tell mds` doesn't understand. To use `tell mds` with a cluster that has a non-default name, you can work around the issue for now by passing the path to the corresponding conf file explicitly with --conf <conf file path>.
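A sketch of that workaround, assuming the conf file for the qetest cluster sits at the conventional /etc/ceph/qetest.conf path (adjust to wherever your cluster's conf actually lives):

<pre>
# Point the ceph CLI at the cluster's conf file directly instead of using --cluster
$ sudo ceph --conf /etc/ceph/qetest.conf tell mds.0 client ls
</pre>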
Moving this bug to verified state.

Verified in ceph build: 12.2.1-44.el7cp (5bef20c3d60a27005cbe5e814d833cb18aa2335f) luminous (stable)

Command output:
<pre>
[root@host1 ~]# ceph tell mds.0 injectargs "--debug-mds 20"
2018-02-05 00:41:41.127707 7fbb457fa700  0 client.4248 ms_handle_reset on 10.70.39.9:6800/1535821975
2018-02-05 00:41:41.136793 7fbb467fc700  0 client.4249 ms_handle_reset on 10.70.39.9:6800/1535821975
debug_mds=20/20

[root@host ~]# ceph tell mds.0 client ls
2018-02-05 00:42:26.328886 7ff68cff9700  0 client.4253 ms_handle_reset on 10.70.39.9:6800/1535821975
2018-02-05 00:42:26.336165 7ff68dffb700  0 client.4254 ms_handle_reset on 10.70.39.9:6800/1535821975
[
    {
        "id": 4232,
        "num_leases": 0,
        "num_caps": 2,
        "state": "open",
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,
        "inst": "client.4232 10.xx.xx.xx:0/2124990151",
        "client_metadata": {
            "ceph_sha1": "5bef20c3d60a27005cbe5e814d833cb18aa2335f",
            "ceph_version": "ceph version 12.2.1-44.el7cp (5bef20c3d60a27005cbe5e814d833cb18aa2335f) luminous (stable)",
            "entity_id": "admin",
            "hostname": "host.lab.eng.blr.redhat.com",
            "mount_point": "/mnt/fuse",
            "pid": "28234",
            "root": "/"
        }
    },
    {
        "id": 4234,
        "num_leases": 0,
        "num_caps": 3,
        "state": "open",
        "replay_requests": 0,
        "completed_requests": 0,
        "reconnecting": false,
        "inst": "client.4234 10.xx.xx.xx:0/3709878253",
        "client_metadata": {
            "entity_id": "admin",
            "hostname": "host.lab.eng.blr.redhat.com",
            "kernel_version": "3.10.0-693.17.1.el7.x86_64"
        }
    }
]
</pre>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0474