.Improved `tcmu-runner` section in the `ceph status` output
Previously, each iSCSI LUN was listed individually, cluttering the `ceph status` output.
With this release, the `ceph status` command summarizes the report and shows only the number of active portals and the number of hosts.
* Description of problem:
When we run the `ceph -s` command to see the cluster health and stats, it currently also lists every single iSCSI gateway disk (`host:pool/image`).
If we have more than 50 iSCSI volumes, it becomes very difficult to see what is going on.
--------------
# ceph -s
cluster:
id: 554214df-ddf6-4f79-96a9-75770e8cbeda
health: HEALTH_WARN
too few PGs per OSD (4 < min 30)
services:
mon: 3 daemons, quorum ceph4node102,ceph4node100,ceph4node101 (age 4h)
mgr: ceph4node101(active, since 2h), standbys: ceph4node102, ceph4node100
osd: 6 osds: 6 up (since 4h), 6 in (since 4h)
**tcmu-runner: 4 daemons active (ceph4admin4.example.com:rbd/disk_1, ceph4admin4.example.com:rbd/test, ceph4node100.example.com:rbd/disk_1, ceph4node100.example.com:rbd/test)**
--------------
I believe we should have a separate command to list the iSCSI disks rather than adding them to the `ceph status` output. The tcmu-runner section should only show the active iSCSI gateways.
* Version-Release number of selected component (if applicable):
RHCS 4.1
* How reproducible:
Always
Steps to Reproduce:
1. Set up iSCSI gateways
2. Add many disks as per the documentation [1]
3. Run `ceph -s`
[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/block_device_guide/index#configuring-the-iscsi-target-using-the-cli_block
@Harish,
No. The detailed `ceph -s` output was changed again after my fix: with Sage's follow-up patches it no longer lists individual daemon IDs/hosts, since that does not scale to larger clusters. For more detail, see https://github.com/ceph/ceph/pull/40220.
With Sage's latest ceph code, the following output is expected:
services:
mon: 1 daemons, quorum a (age 4m)
mgr: x(active, since 3m)
osd: 1 osds: 1 up (since 3m), 1 in (since 3m)
cephfs-mirror: 1 daemon active (1 hosts)
rbd-mirror: 2 daemons active (1 hosts)
rgw: 2 daemons active (1 hosts, 1 zones)
This matches your test output as well, so your test should be fine.
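The aggregation the new format performs can be sketched in a few lines of Python. This is a minimal illustration, not the actual ceph-mgr code: it takes per-daemon entries in the `host:pool/image` form shown in the old verbose output and collapses them into an "N daemons active (M hosts)" summary line.

```python
def summarize(service, entries):
    """Collapse per-daemon entries like 'host:pool/image' into an
    'N daemons active (M hosts)' line, mimicking the summarized format."""
    # The host is everything before the first ':' in each entry.
    hosts = {entry.split(":", 1)[0] for entry in entries}
    return f"{service}: {len(entries)} daemons active ({len(hosts)} hosts)"

# The four entries from the verbose output quoted in this report:
entries = [
    "ceph4admin4.example.com:rbd/disk_1",
    "ceph4admin4.example.com:rbd/test",
    "ceph4node100.example.com:rbd/disk_1",
    "ceph4node100.example.com:rbd/test",
]
print(summarize("tcmu-runner", entries))
# tcmu-runner: 4 daemons active (2 hosts)
```

The summary length stays constant no matter how many LUNs are exported, which is the scaling point made above.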
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:3294