Bug 2018906 - [GSS] Stray daemon tcmu-runner is reported not managed by cephadm
Summary: [GSS] Stray daemon tcmu-runner is reported not managed by cephadm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.2
Assignee: Adam King
QA Contact: Preethi
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2102272
 
Reported: 2021-11-01 08:40 UTC by hhuan
Modified: 2024-12-20 21:31 UTC
CC List: 13 users

Fixed In Version: ceph-16.2.8-2.el8cp
Doc Type: Bug Fix
Doc Text:
.The `tcmu-runner` daemons are no longer reported as stray daemons

Previously, `tcmu-runner` daemons were not actively tracked by `cephadm` because they were considered part of the iSCSI service, so `cephadm` reported them as stray daemons. With this fix, a `tcmu-runner` daemon that matches up with a known iSCSI daemon is no longer marked as a stray daemon.
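As background, here is a hedged way to observe the behavior this fix addresses on a cluster with iSCSI gateways deployed. The commands are standard cephadm/orchestrator CLI; output is omitted, and daemon or host names would be cluster-specific.

    ceph orch ps --daemon_type iscsi   # the iSCSI gateway daemons that cephadm deploys and tracks
    cephadm ls                         # run directly on a gateway host; lists the daemons cephadm knows about there
    ceph health detail                 # before the fix, this could raise CEPHADM_STRAY_DAEMON for the tcmu-runner processes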
Clone Of:
Environment:
Last Closed: 2022-08-09 17:36:41 UTC
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-2140 - 2021-11-01 08:41:40 UTC
Red Hat Product Errata RHSA-2022:5997 - 2022-08-09 17:37:11 UTC

Comment 7 Preethi 2022-05-31 14:27:36 UTC
I am not able to reproduce the issue on the latest 5.2 Ceph version:
ceph version 16.2.8-22.el8cp

I tried the steps mentioned above (a sketch of the corresponding commands is included after the list):
1. Install the iSCSI gateway
2. Configure the iSCSI target and block devices, add disks, expose LUNs, and run I/O
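For reference, a minimal sketch of step 1 on a cephadm-managed cluster. The pool name, service id, host names, and credentials are illustrative placeholders, not values from this report.

    # Create and initialize an RBD pool for the gateway.
    ceph osd pool create iscsi_pool
    rbd pool init iscsi_pool

    # Write a cephadm service specification for the iSCSI gateways.
    cat > iscsi.yaml << 'EOF'
    service_type: iscsi
    service_id: igw
    placement:
      hosts:
        - node2
        - node3
    spec:
      pool: iscsi_pool
      api_user: admin
      api_password: admin
    EOF

    # Deploy the gateways; the tcmu-runner processes are started as part of
    # the iscsi daemons that cephadm creates from this spec.
    ceph orch apply -i iscsi.yaml

Step 2 (creating the target, adding gateways and disks, and mapping LUNs to initiators) is then done interactively in gwcli or from the dashboard.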

ceph status:
[ceph: root@ceph-pnataraj-srzgvw-node1-installer /]# ceph status
  cluster:
    id:     0fc61ac6-e0a7-11ec-bec2-fa163e00b1a6
    health: HEALTH_OK
 
  services:
    mon:         3 daemons, quorum ceph-pnataraj-srzgvw-node1-installer,ceph-pnataraj-srzgvw-node2,ceph-pnataraj-srzgvw-node3 (age 5h)
    mgr:         ceph-pnataraj-srzgvw-node1-installer.zfkfme(active, since 5h), standbys: ceph-pnataraj-srzgvw-node2.suxjtu
    osd:         10 osds: 10 up (since 5h), 10 in (since 5h)
    tcmu-runner: 4 portals active (2 hosts)
 
  data:
    pools:   2 pools, 33 pgs
    objects: 9 objects, 17 KiB
    usage:   70 MiB used, 200 GiB / 200 GiB avail
    pgs:     33 active+clean
 
  io:
    client:   1.7 KiB/s rd, 1 op/s rd, 0 op/s wr

Comment 15 errata-xmlrpc 2022-08-09 17:36:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997

