Bug 2130845 - [cee/sd][ceph-dashboard] ceph-dashboard is showing ISCSI Gateways as down after following the documentation
Summary: [cee/sd][ceph-dashboard] ceph-dashboard is showing ISCSI Gateways as down aft...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z1
Assignee: Nizamudeen
QA Contact: Sayalee
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-09-29 07:12 UTC by Geo Jose
Modified: 2023-02-28 10:06 UTC (History)
CC List: 7 users

Fixed In Version: ceph-16.2.10-121.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-28 10:06:16 UTC
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 48304 0 None open mgr/cephadm: iscsi username and password defaults to admin 2022-09-29 20:18:42 UTC
Red Hat Issue Tracker RHCEPH-5382 0 None None None 2022-09-29 07:24:51 UTC
Red Hat Product Errata RHSA-2023:0980 0 None None None 2023-02-28 10:06:51 UTC

Description Geo Jose 2022-09-29 07:12:14 UTC
Description of problem:
 - ceph-dashboard is showing ISCSI Gateways as "down".

Version-Release number of selected component (if applicable):
 - Red Hat Ceph Storage 5.x

Steps to Reproduce:

1. In an RHCS 5.x cluster, install the iSCSI gateway as per the documentation:
   https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/block_device_guide/the-ceph-iscsi-gateway#installing-the-ceph-iscsi-gateway-using-cli_block
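   The documented install applies a service specification with `ceph orch apply -i <spec>.yaml`. A minimal sketch of such a spec, assuming the two gateway hosts from this report and a hypothetical pool name `iscsi_pool`; per the linked PR (Github ceph pull 48304), cephadm falls back to `admin`/`admin` when `api_user`/`api_password` are omitted, so the values below are illustrative:
   ~~~
   service_type: iscsi
   service_id: iscsi
   placement:
     hosts:
       - iscsi1
       - iscsi2
   spec:
     pool: iscsi_pool                              # hypothetical pool name
     api_user: admin                               # credentials the dashboard uses to reach the gateway API
     api_password: admin
     trusted_ip_list: "192.168.0.10,192.168.0.11"  # illustrative gateway IPs
   ~~~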

2. Run the commands below to verify that the iSCSI gateways are deployed:
   ~~~
   # ceph orch ls
   Ex:
   [ceph: root@ceph1 ceph]# ceph orch ls
   NAME                           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT                 
   alertmanager                   ?:9093,9094      1/1  5m ago     3w   count:1;label:monitoring  
   crash                                           3/3  5m ago     3w   label:ceph                
   grafana                        ?:3000           1/1  5m ago     3w   count:1;label:monitoring  
   iscsi.iscsi                                     2/2  2m ago     2m   iscsi1;iscsi2                <<<        
   mds.test                                        1/1  5m ago     2w   ceph1;count:1    

   # ceph orch ps
   Ex:
   [ceph: root@ceph1 ceph]# ceph orch ps
   NAME                                        HOST                  PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID  
   alertmanager.cephadmin                      cephadmin.redhat.com          running (3w)      6m ago   3w    30.3M        -  0.21.0           5fedc941a359  9a9483373953  
   crash.ceph1                                 ceph1                         running (3w)      6m ago   3w    7436k        -  16.2.8-85.el8cp  b2c997ff1898  29a4ac204dbc  
   crash.ceph2                                 ceph2                         running (4d)      6m ago   3w    12.0M        -  16.2.8-85.el8cp  b2c997ff1898  d1260863f5a5   
   crash.ceph3                                 ceph3                         running (3w)      6m ago   3w    1106k        -  16.2.8-85.el8cp  b2c997ff1898  ff03fe323f6a  
   grafana.cephadmin                           cephadmin.redhat.com          running (3w)      6m ago   3w    71.1M        -  8.3.5            a283f9df3197  09cf704031a0  
   iscsi.iscsi.iscsi1.cfsrwv                   iscsi1                        running (3m)      3m ago   3m    52.2M        -  3.5              b2c997ff1898  53fa320f5111  
   iscsi.iscsi.iscsi2.kewpgv                   iscsi2                        running (3m)      3m ago   3m    77.3M        -  3.5              b2c997ff1898  8d64e9a82239  
   ~~~

3. Log in to the iSCSI node and check that the iSCSI containers are up and running:
   ~~~
   # podman ps | grep iscsi
   [root@iscsi1 8db99d94-5a7a-4f34-91b0-e5f74f47fb81]# podman ps | grep iscsi
   f8e86fc3e6f9  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6                                                   --no-collector.ti...  11 days ago    Up 11 days ago                   ceph-8db99d94-5a7a-4f34-91b0-e5f74f47fb81-node-exporter-iscsi1
   30278ff8b6ba  registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:3075e8708792ebd527ca14849b6af4a11256a3f881ab09b837d7af0f8b2102ea                        4 minutes ago  Up 4 minutes ago              ceph-8db99d94-5a7a-4f34-91b0-e5f74f47fb81-iscsi-iscsi-iscsi1-cfsrwv-tcmu
   53fa320f5111  registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:3075e8708792ebd527ca14849b6af4a11256a3f881ab09b837d7af0f8b2102ea                        4 minutes ago  Up 4 minutes ago              ceph-8db99d94-5a7a-4f34-91b0-e5f74f47fb81-iscsi-iscsi-iscsi1-cfsrwv
   ~~~

4. Log in to the dashboard and check the status of the iSCSI gateways. The dashboard shows zero gateways as "up".
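   The gateway records held by the mgr dashboard module can also be inspected from the CLI, which helps separate a registration problem from a container problem. A diagnostic sketch (the exact output shape is illustrative):
   ~~~
   # ceph dashboard iscsi-gateway-list
   ~~~
   If no gateways are listed, or the stored service URLs carry credentials that the gateway API rejects, the dashboard will report the gateways as down even though `podman ps` shows the containers running.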


Actual results:
 - ceph-dashboard is showing ISCSI Gateways as "down".

Expected results:
 - ceph-dashboard should show the ISCSI Gateways as "up".

Comment 17 errata-xmlrpc 2023-02-28 10:06:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980

