Bug 2018906

Summary: [GSS] Stray daemon tcmu-runner is reported not managed by cephadm
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: hhuan
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Preethi <pnataraj>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: medium
Version: 5.0
CC: adking, akraj, csharpe, fairytale, gjose, kdreyer, lithomas, mhackett, mmuench, rlepaksh, sbaldwin, tserlin, vereddy
Target Milestone: ---
Keywords: Rebase
Target Release: 5.2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-16.2.8-2.el8cp
Doc Type: Bug Fix
Doc Text:
.The `tcmu-runner` daemons are no longer reported as stray daemons
Previously, `cephadm` did not actively track `tcmu-runner` daemons because they were considered part of iSCSI, and they were therefore reported as stray daemons. With this fix, a `tcmu-runner` daemon that matches up with a known iSCSI daemon is no longer marked as a stray daemon (see the sketch after the header fields below).
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-08-09 17:36:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2102272
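The doc text above describes the fix as a matching step in cephadm's stray-daemon detection. A minimal Python sketch of that idea, using a hypothetical data model and host-level matching for illustration (not cephadm's actual implementation):

    # Hypothetical sketch: during stray-daemon detection, skip a
    # tcmu-runner process when a managed iSCSI daemon exists on the
    # same host, since tcmu-runner is deployed as part of that service.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Daemon:
        daemon_type: str  # e.g. "iscsi", "tcmu-runner", "mon"
        daemon_id: str
        hostname: str

    def find_stray_daemons(running, managed):
        """Return running daemons that cephadm does not manage."""
        managed_keys = {(d.daemon_type, d.daemon_id, d.hostname) for d in managed}
        iscsi_hosts = {d.hostname for d in managed if d.daemon_type == "iscsi"}
        strays = []
        for d in running:
            if (d.daemon_type, d.daemon_id, d.hostname) in managed_keys:
                continue
            if d.daemon_type == "tcmu-runner" and d.hostname in iscsi_hosts:
                continue  # matches a known iSCSI daemon; not stray
            strays.append(d)
        return strays

    # The tcmu-runner daemon below is not managed directly, but matches
    # the managed iSCSI daemon on node2, so nothing is reported as stray.
    managed = [Daemon("iscsi", "iscsi.pool.node2.xyz", "node2")]
    running = managed + [Daemon("tcmu-runner", "pool/disk_1", "node2")]
    assert find_stray_daemons(running, managed) == []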

Comment 7 Preethi 2022-05-31 14:27:36 UTC
I am not able to see the issue in the latest 5.2 Ceph version:
ceph version 16.2.8-22.el8cp

Tried the steps mentioned above (a command sketch follows the list):
1. Installed the iSCSI gateway.
2. Configured the iSCSI target and block devices, added disks, exposed the LUNs, and ran I/Os.
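
For reference, a deployment along these lines can be driven through the cephadm orchestrator. The pool name, API credentials, and placement below are illustrative placeholders; the target, disk, and LUN setup itself happens interactively in gwcli:

    # ceph osd pool create iscsi-pool
    # rbd pool init iscsi-pool
    # ceph orch apply iscsi iscsi-pool api_user api_password --placement="ceph-node2 ceph-node3"
    # gwcli   (create the iqn target, add the gateways and rbd disks, map LUNs to an initiator)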

ceph status:
[ceph: root@ceph-pnataraj-srzgvw-node1-installer /]# ceph status
  cluster:
    id:     0fc61ac6-e0a7-11ec-bec2-fa163e00b1a6
    health: HEALTH_OK
 
  services:
    mon:         3 daemons, quorum ceph-pnataraj-srzgvw-node1-installer,ceph-pnataraj-srzgvw-node2,ceph-pnataraj-srzgvw-node3 (age 5h)
    mgr:         ceph-pnataraj-srzgvw-node1-installer.zfkfme(active, since 5h), standbys: ceph-pnataraj-srzgvw-node2.suxjtu
    osd:         10 osds: 10 up (since 5h), 10 in (since 5h)
    tcmu-runner: 4 portals active (2 hosts)
 
  data:
    pools:   2 pools, 33 pgs
    objects: 9 objects, 17 KiB
    usage:   70 MiB used, 200 GiB / 200 GiB avail
    pgs:     33 active+clean
 
  io:
    client:   1.7 KiB/s rd, 1 op/s rd, 0 op/s wr
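
The tcmu-runner portals now appear under the services section above instead of triggering a stray-daemon warning; the absence of the CEPHADM_STRAY_DAEMON health warning can be cross-checked with:

    # ceph orch ps --daemon-type iscsi
    # ceph health detail   (expect HEALTH_OK, with no CEPHADM_STRAY_DAEMON warning)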

Comment 15 errata-xmlrpc 2022-08-09 17:36:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997