Bug 2055616 - [cee/sd][iscsi] rbd-target-api daemon is in activating state.
Summary: [cee/sd][iscsi] rbd-target-api daemon is in activating state.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: iSCSI
Version: 4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.1z1
Assignee: Xiubo Li
QA Contact: Preethi
Docs Contact: Mary Frances Hull
URL:
Whiteboard:
Depends On:
Blocks: 2085458
 
Reported: 2022-02-17 11:33 UTC by Lijo Stephen Thomas
Modified: 2023-02-06 16:30 UTC (History)
9 users

Fixed In Version: ceph-iscsi-3.5-3.el8cp
Doc Type: Bug Fix
Doc Text:
.The `rbd-target-api` daemon starts successfully when creating the disks
Previously, failures while creating disks could leave partially configured disk information in the `gateway.conf` object, which prevented the `rbd-target-api` daemon from starting and left it in an activating state indefinitely. With this release, when the `rbd-target-api` service starts, it either recovers the disk information from the existing LIO devices or skips it, so the service starts as expected.
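The recover-or-skip behavior described above can be sketched roughly as follows. This is illustrative Python only, not the actual ceph-iscsi code; the function name, the shape of the config entries, and the LIO device set are all assumptions.

```python
# Illustrative sketch only -- NOT the actual ceph-iscsi implementation.
# Assumes `config_disks` is the "disks" dict from gateway.conf and
# `lio_devices` is the set of backstore names currently defined in LIO.

def reconcile_disks(config_disks, lio_devices):
    """Recover disks that have an existing LIO device; skip partial entries."""
    recovered, skipped = {}, []
    for disk_id, entry in config_disks.items():
        # A partial entry (e.g. only a "created" timestamp) has no LIO device
        # backing it, so it is skipped instead of blocking startup.
        if disk_id in lio_devices:
            recovered[disk_id] = entry
        else:
            skipped.append(disk_id)
    return recovered, skipped

# Example: "test/image3" was only partially created, so it is skipped.
config = {
    "test/image1": {"created": "2022/04/22 03:16:10", "wwn": "..."},
    "test/image3": {"created": "2022/02/04 15:22:54"},
}
recovered, skipped = reconcile_disks(config, {"test/image1"})
```

With this approach a partial entry no longer stalls the daemon in an activating state; it is simply left out of the recovered set.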
Clone Of:
Environment:
Last Closed: 2022-05-18 10:38:15 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-3188 0 None None None 2022-02-17 11:36:52 UTC
Red Hat Product Errata RHBA-2022:4622 0 None None None 2022-05-18 10:38:36 UTC

Comment 16 Preethi 2022-04-18 05:10:28 UTC
Hi Xiubo Li,

As we have entered Code freeze/blockers only stage for 5.1z1, can you please let us know when QE can expect this BZ to be ON_QA ?

Regards,
Preethi

Comment 22 Preethi 2022-04-22 10:50:38 UTC
Below are the steps followed to verify the issue; no issues were seen. Moving to verified state.

1. Create an image named `image3` via `gwcli`.
2. Delete the image from the test pool directly with `rados -p test rm test.image3`.
3. Delete the whole image config from the disks section using the commands below:

rados -p iscsipool get gateway.conf blah2.txt
read the contents of blah2.txt and make changes to the file as suggested
rados -p iscsipool put gateway.conf blah2.txt

4. Run `gwcli ls`; the expected result is that it no longer shows `image3`.
5. Restart the rbd-target-api service or containers; `gwcli ls` should show the same output as in step 4.
6. Read the `gateway.conf` object and verify there is no partial config left for `image3`, something like:

 "test/image3": {
            "created": "2022/02/04 15:22:54"
        },
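The manual edit in step 3 is not spelled out ("make changes to the file as suggested"), so the sketch below is one plausible reading, assumed here for illustration: reduce the `test/image3` entry to only its `created` timestamp, matching the partial-entry example shown in step 6. The extra keys in the sample dict are hypothetical stand-ins for a full disk entry.

```python
import json

# Hypothetical stand-in for the object fetched with:
#   rados -p iscsipool get gateway.conf blah2.txt
sample = {"disks": {"test/image3": {
    "created": "2022/02/04 15:22:54",
    "wwn": "deadbeef",            # hypothetical keys a full entry would carry
    "backstore": "user:rbd",
}}}
with open("blah2.txt", "w") as f:
    json.dump(sample, f, indent=4)

# Edit: leave only the partial entry for test/image3.
with open("blah2.txt") as f:
    conf = json.load(f)
conf["disks"]["test/image3"] = {
    "created": conf["disks"]["test/image3"]["created"]
}
with open("blah2.txt", "w") as f:
    json.dump(conf, f, indent=4)
# Then push it back:  rados -p iscsipool put gateway.conf blah2.txt
```

After the fix, restarting `rbd-target-api` should tolerate (skip or clean up) such a partial entry rather than hang in an activating state.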



Output from gwcli -
Warning: Could not load preferences file /root/.gwcli/prefs.bin.
o- / ......................................................................................................................... [...]
  o- cluster ......................................................................................................... [Clusters: 1]
  | o- ceph ............................................................................................................ [HEALTH_OK]
  |   o- pools .......................................................................................................... [Pools: 4]
  |   | o- .rgw.root ............................................................. [(x3), Commit: 0.00Y/66272684K (0%), Used: 0.00Y]
  |   | o- device_health_metrics ................................................. [(x3), Commit: 0.00Y/66272684K (0%), Used: 0.00Y]
  |   | o- iscsipool ............................................................... [(x3), Commit: 0.00Y/66272684K (0%), Used: 48K]
  |   | o- test .................................................................... [(x3), Commit: 0.00Y/66272684K (0%), Used: 84K]
  |   o- topology ............................................................................................... [OSDs: 10,MONs: 3]
  o- disks ....................................................................................................... [0.00Y, Disks: 0]
  o- iscsi-targets ............................................................................... [DiscoveryAuth: None, Targets: 1]
    o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw2 ......................................................... [Auth: None, Gateways: 2]
      o- disks .......................................................................................................... [Disks: 0]
      o- gateways ............................................................................................ [Up: 2/2, Portals: 2]
      | o- ceph-iscsi-preethi-4ovirz-node4 ...................................................................... [10.0.211.24 (UP)]
      | o- ceph-iscsi-preethi-4ovirz-node5 ..................................................................... [10.0.210.189 (UP)]
      o- host-groups .................................................................................................. [Groups : 0]
      o- hosts ....................................................................................... [Auth: ACL_ENABLED, Hosts: 0]


output from the file -

[root@ceph-iscsi-preethi-4ovirz-node1-installer cephuser]# rados -p iscsipool get gateway.conf blah2.txt
[root@ceph-iscsi-preethi-4ovirz-node1-installer cephuser]# cat blah2.txt 
{
    "created": "2022/04/22 03:16:10",
    "discovery_auth": {
        "mutual_password": "",
        "mutual_password_encryption_enabled": false,
        "mutual_username": "",
        "password": "",
        "password_encryption_enabled": false,
        "username": ""
    },
    "disks": {},
    "epoch": 5,
    "gateways": {
        "ceph-iscsi-preethi-4ovirz-node4": {
            "active_luns": 0,
            "created": "2022/04/22 03:18:33",
            "updated": "2022/04/22 03:18:33"
        },
        "ceph-iscsi-preethi-4ovirz-node5": {
            "active_luns": 0,
            "created": "2022/04/22 03:18:44",
            "updated": "2022/04/22 03:18:44"
        }
    },




[root@ceph-iscsi-preethi-4ovirz-node4 /]# rpm -qa | grep iscsi*
ceph-iscsi-3.5-3.el8cp.noarch
[root@ceph-iscsi-preethi-4ovirz-node4 /]# 

[root@ceph-pnataraj-hkpq9z-node1-installer cephuser]# ceph version
ceph version 16.2.7-107.el8cp (3106079e34bb001fa0999e9b975bd5e8a413f424) pacific (stable)
[root@ceph-pnataraj-hkpq9z-node1-installer cephuser]#

Comment 27 errata-xmlrpc 2022-05-18 10:38:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:4622

