Bug 1847231 - [GSS][RFE] Improve 'tcmu-runner' section in ceph status output
Summary: [GSS][RFE] Improve 'tcmu-runner' section in ceph status output
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: iSCSI
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Xiubo Li
QA Contact: Harish Munjulur
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 1959686
 
Reported: 2020-06-16 02:55 UTC by Karun Josy
Modified: 2024-03-25 16:03 UTC
CC List: 16 users

Fixed In Version: tcmu-runner-1.5.4-1.el8cp
Doc Type: Enhancement
Doc Text:
.Improved `tcmu-runner` section in the `ceph status` output
Previously, each iSCSI LUN was listed individually, cluttering the `ceph status` output. With this release, the `ceph status` command summarizes the report and shows only the number of active portals and the number of hosts.
Clone Of:
Environment:
Last Closed: 2021-08-30 08:25:38 UTC
Embargoed:




Links
System ID                                  Last Updated
Ceph Project Bug Tracker 49057             2021-01-29 13:37:23 UTC
Red Hat Issue Tracker RHCEPH-789           2021-08-19 16:41:41 UTC
Red Hat Product Errata RHBA-2021:3294      2021-08-30 08:25:50 UTC

Description Karun Josy 2020-06-16 02:55:03 UTC
* Description of problem:
When we run the `ceph -s` command to see the cluster health and stats, it currently also lists every single iSCSI gateway:pool/disk entry.
If we have 50+ iSCSI volumes, it becomes very difficult to see what is going on.

--------------
# ceph -s
  cluster:
    id:     554214df-ddf6-4f79-96a9-75770e8cbeda
    health: HEALTH_WARN
            too few PGs per OSD (4 < min 30)
 
  services:
    mon:         3 daemons, quorum ceph4node102,ceph4node100,ceph4node101 (age 4h)
    mgr:         ceph4node101(active, since 2h), standbys: ceph4node102, ceph4node100
    osd:         6 osds: 6 up (since 4h), 6 in (since 4h)

**tcmu-runner: 4 daemons active (ceph4admin4.example.com:rbd/disk_1, ceph4admin4.example.com:rbd/test, ceph4node100.example.com:rbd/disk_1, ceph4node100.example.com:rbd/test)**
--------------

I believe we should have a separate command to list the iSCSI disks rather than adding them to the ceph status output. The tcmu-runner section should only show the active iSCSI gateways.
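
For reference, a rough sketch of what such a separate listing could look like with the existing gateway CLI; the exact tree layout depends on the gwcli version, and the /disks path shown interactively is assumed from the configshell layout:

--------------
# On an iSCSI gateway node, the gateway CLI lists the configuration
# (targets, gateways, and every mapped rbd disk) without touching `ceph -s`:
gwcli ls
# or, interactively, just the disks subtree:
# gwcli
/> ls /disks
--------------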

* Version-Release number of selected component (if applicable):
RHCS 4.1

* How reproducible:
Always

Steps to Reproduce:
1. Set up iSCSI gateways
2. Add many disks per the documentation [1] (a rough CLI sketch follows the link below)
3. Run ceph -s

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/block_device_guide/index#configuring-the-iscsi-target-using-the-cli_block
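
A rough sketch of step 2 from the command line, assuming the gwcli disk-creation syntax from the referenced guide [1]; the pool/image names and sizes here are placeholders:

--------------
# On a configured gateway node, create many rbd-backed disks (repeat for 50+ images):
# gwcli
/> cd /disks
/disks> create pool=rbd image=disk_1 size=10G
/disks> create pool=rbd image=disk_2 size=10G
# ...
# Then, on an admin/monitor node, check the services section:
# ceph -s
--------------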

Comment 15 Harish Munjulur 2021-05-06 10:15:24 UTC
QA failed - missing target names

ceph version 16.2.0-22.el8cp

Expected:
    tcmu-runner: 2 portals active (target1, target2)


observed:
services:
    mon:         3 daemons, quorum magna031,magna032,magna006 (age 2d)
    mgr:         magna032.xmjzla(active, since 23h), standbys: magna006.sarymt, magna031.maadtx
    mds:         2/2 daemons up, 5 standby
    osd:         30 osds: 30 up (since 2d), 30 in (since 5d)
    rbd-mirror:  1 daemon active (1 hosts)
    tcmu-runner: 2 portals active (2 hosts) - missing target names

Comment 16 Xiubo Li 2021-05-06 12:45:03 UTC
@Harish,

No, the `ceph -s` output detail was changed again shortly after my fix for this. With Sage's follow-up patches it no longer lists individual daemon IDs/hosts, since that does not scale for larger clusters. For more detail, please see https://github.com/ceph/ceph/pull/40220.

With Sage's latest ceph code, the following output is expected:

  services:
    mon:           1 daemons, quorum a (age 4m)
    mgr:           x(active, since 3m)
    osd:           1 osds: 1 up (since 3m), 1 in (since 3m)
    cephfs-mirror: 1 daemon active (1 hosts)
    rbd-mirror:    2 daemons active (1 hosts)
    rgw:           2 daemons active (1 hosts, 1 zones)

This is the same behavior you see in your test, so your test result should be fine.
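
As a note on the missing target names: even with the summarized line, the per-daemon records are still kept in the cluster's service map. A hedged sketch of where that detail can be pulled from (these mon commands exist upstream, but the exact output layout is not verified here):

--------------
# Human-readable per-service, per-daemon state from the service map:
ceph service status
# Full service map, one record per registered daemon, as JSON:
ceph -f json-pretty service dump
--------------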

Comment 17 Harish Munjulur 2021-05-07 09:30:24 UTC
Thanks for sharing the latest changes. I will go ahead and move this to the verified state.

Comment 24 errata-xmlrpc 2021-08-30 08:25:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

