Bug 2089287 - Drop iSCSI from product
Summary: Drop iSCSI from product
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: iSCSI
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Ilya Dryomov
QA Contact: Preethi
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-05-23 11:08 UTC by Ilya Dryomov
Modified: 2023-03-21 08:48 UTC (History)
12 users

Fixed In Version: ceph-17.2.3-1.el9cp
Doc Type: Removed functionality
Doc Text:
.RBD iSCSI gateway support is now retired. From this release onwards, RHCS no longer ships the iSCSI gateway components. The RBD iSCSI gateway has been dropped in favor of the future RBD NVMe-oF gateway.
Clone Of:
Environment:
Last Closed: 2023-03-20 18:56:27 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-4349 0 None None None 2022-05-23 11:17:12 UTC
Red Hat Product Errata RHBA-2023:1360 0 None None None 2023-03-20 18:57:02 UTC

Comment 1 RHEL Program Management 2022-05-23 11:08:18 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Ken Dreyer (Red Hat) 2022-07-29 18:39:40 UTC
I've confirmed the iSCSI packages are not present in our 6.0 Tools Yum repo, nor in the rhceph container images.

Comment 6 Preethi 2022-08-18 14:23:17 UTC
Packages are removed in the build. However, the ceph orch help output still lists the iSCSI options and lets us deploy iSCSI; these need to be removed from the orch help output.

Snippet below:
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph orch apply iscsi help
Invalid command: missing required parameter api_user(<string>)
orch apply iscsi <pool> <api_user> <api_password> [<trusted_ip_list>] [<placement>] [--unmanaged] [--dry-run] [--format {plain|json|json-pretty|yaml|xml-pretty|xml}] [--no-overwrite] :  Scale an iSCSI service
Error EINVAL: invalid command
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph orch apply iscsi help


[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph orch apply iscsi test --placement="ceph-pnataraj-rf2yxf-node4,ceph-pnataraj-rf2yxf-node5" --trusted_ip_list="10.0.209.218,10.0.208.239" admin admin
Scheduled iscsi.test update...
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT                                              
alertmanager               ?:9093,9094      1/1  2m ago     82m  count:1                                                
crash                                       5/5  4m ago     82m  *                                                      
grafana                    ?:3000           1/1  2m ago     82m  count:1                                                
iscsi.test                                  0/2  -          3s   ceph-pnataraj-rf2yxf-node4;ceph-pnataraj-rf2yxf-node5  
mgr                                         2/2  3m ago     78m  label:mgr                                              
mon                                         5/5  4m ago     82m  count:5                                                
node-exporter              ?:9100           5/5  4m ago     82m  *                                                      
osd.all-available-devices                    12  4m ago     71m  *                                                      
prometheus                 ?:9095           1/1  2m ago     82m  count:1                                                
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# 


[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph health detail
HEALTH_WARN 2 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
    daemon iscsi.test.ceph-pnataraj-rf2yxf-node4.yviswu on ceph-pnataraj-rf2yxf-node4 is in error state
    daemon iscsi.test.ceph-pnataraj-rf2yxf-node5.zwjrom on ceph-pnataraj-rf2yxf-node5 is in error state
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# 


[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph orch ps
NAME                                                HOST                                  PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION         IMAGE ID      CONTAINER ID  
alertmanager.ceph-pnataraj-rf2yxf-node1-installer   ceph-pnataraj-rf2yxf-node1-installer  *:9093,9094  running (77m)     3m ago  82m    20.1M        -                  ba2b418f427c  921b972848c2  
crash.ceph-pnataraj-rf2yxf-node1-installer          ceph-pnataraj-rf2yxf-node1-installer               running (82m)     3m ago  82m    6581k        -  17.2.3-9.el9cp  9bd0ac2bfeb8  1093997d8822  
crash.ceph-pnataraj-rf2yxf-node2                    ceph-pnataraj-rf2yxf-node2                         running (79m)     4m ago  79m    6581k        -  17.2.3-9.el9cp  9bd0ac2bfeb8  a7175df4e235  
crash.ceph-pnataraj-rf2yxf-node3                    ceph-pnataraj-rf2yxf-node3                         running (78m)     5m ago  78m    6581k        -  17.2.3-9.el9cp  9bd0ac2bfeb8  0ea28bb79ec9  
crash.ceph-pnataraj-rf2yxf-node4                    ceph-pnataraj-rf2yxf-node4                         running (78m)    37s ago  78m    6585k        -  17.2.3-9.el9cp  9bd0ac2bfeb8  a9722485cd7b  
crash.ceph-pnataraj-rf2yxf-node5                    ceph-pnataraj-rf2yxf-node5                         running (78m)    37s ago  78m    6581k        -  17.2.3-9.el9cp  9bd0ac2bfeb8  ca6b16aa35bb  
grafana.ceph-pnataraj-rf2yxf-node1-installer        ceph-pnataraj-rf2yxf-node1-installer  *:3000       running (81m)     3m ago  82m    50.7M        -  8.3.5           dad864ee21e9  08be5bf3b7a6  
iscsi.test.ceph-pnataraj-rf2yxf-node4.yviswu        ceph-pnataraj-rf2yxf-node4                         error            37s ago  43s        -        -  <unknown>       <unknown>     <unknown>     
iscsi.test.ceph-pnataraj-rf2yxf-node5.zwjrom        ceph-pnataraj-rf2yxf-node5                         error            37s ago  41s        -        -  <unknown>       <unknown>     <unknown>     
mgr.ceph-pnataraj-rf2yxf-node1-installer.vwydac     ceph-pnataraj-rf2yxf-node1-installer  *:9283       running (83m)     3m ago  83m     466M        -  17.2.3-9.el9cp  9bd0ac2bfeb8  83f2ad6e83ab  
mgr.ceph-pnataraj-rf2yxf-node2.bovced               ceph-pnataraj-rf2yxf-node2            *:8443,9283  running (79m)     4m ago  79m     401M        -  17.2.3-9.el9cp  9bd0ac2bfeb8  21669c28070f  
mon.ceph-pnataraj-rf2yxf-node1-installer            ceph-pnataraj-rf2yxf-node1-installer               running (83m)     3m ago  83m     100M    2048M  17.2.3-9.el9cp  9bd0ac2bfeb8  15faf4b06cca  
mon.ceph-pnataraj-rf2yxf-node2                      ceph-pnataraj-rf2yxf-node2                         running (79m)     4m ago  79m    87.6M    2048M  17.2.3-9.el9cp  9bd0ac2bfeb8  08af0508d14d  
mon.ceph-pnataraj-rf2yxf-node3                      ceph-pnataraj-rf2yxf-node3                         running (78m)     5m ago  78m    89.7M    2048M  17.2.3-9.el9cp  9bd0ac2bfeb8  718f8a1bcc02  
mon.ceph-pnataraj-rf2yxf-node4                      ceph-pnataraj-rf2yxf-node4                         running (78m)    37s ago  78m    89.9M    2048M  17.2.3-9.el9cp  9bd0ac2bfeb8  beb759f0a9a3  
mon.ceph-pnataraj-rf2yxf-node5                      ceph-pnataraj-rf2yxf-node5                         running (78m)    37s ago  78m    90.9M    2048M  17.2.3-9.el9cp  9bd0ac2bfeb8  13ed0be9136a  
node-exporter.ceph-pnataraj-rf2yxf-node1-installer  ceph-pnataraj-rf2yxf-node1-installer  *:9100       running (82m)     3m ago  82m    17.2M        -                  1dbe0e931976  6f7f1d9a56f2  
node-exporter.ceph-pnataraj-rf2yxf-node2            ceph-pnataraj-rf2yxf-node2            *:9100       running (79m)     4m ago  79m    17.1M        -                  1dbe0e931976  6cfeb71eedad  
node-exporter.ceph-pnataraj-rf2yxf-node3            ceph-pnataraj-rf2yxf-node3            *:9100       running (78m)     5m ago  78m    17.2M        -                  1dbe0e931976  26e535bc8203  
node-exporter.ceph-pnataraj-rf2yxf-node4            ceph-pnataraj-rf2yxf-node4            *:9100       running (78m)    37s ago  78m    15.4M        -                  1dbe0e931976  bda16f393797  
node-exporter.ceph-pnataraj-rf2yxf-node5            ceph-pnataraj-rf2yxf-node5            *:9100       running (77m)    37s ago  77m    18.9M        -                  1dbe0e931976  07c6dd2519cf  
osd.0                                               ceph-pnataraj-rf2yxf-node3                         running (71m)     5m ago  71m     183M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  9d47a0bd9481  
osd.1                                               ceph-pnataraj-rf2yxf-node5                         running (71m)    37s ago  71m     205M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  72aaa36da7ba  
osd.2                                               ceph-pnataraj-rf2yxf-node4                         running (71m)    37s ago  71m     169M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  9a64ada6326e  
osd.3                                               ceph-pnataraj-rf2yxf-node3                         running (71m)     5m ago  71m     223M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  004d598878a4  
osd.4                                               ceph-pnataraj-rf2yxf-node5                         running (71m)    37s ago  71m     124M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  3f1e163c6181  
osd.5                                               ceph-pnataraj-rf2yxf-node4                         running (71m)    37s ago  71m     191M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  2b4f907347ef  
osd.6                                               ceph-pnataraj-rf2yxf-node3                         running (71m)     5m ago  71m     181M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  c48b5eda9b05  
osd.7                                               ceph-pnataraj-rf2yxf-node5                         running (71m)    37s ago  71m     202M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  ebf03dbaf4fc  
osd.8                                               ceph-pnataraj-rf2yxf-node4                         running (71m)    37s ago  71m     137M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  dbefa4b3b1fd  
osd.9                                               ceph-pnataraj-rf2yxf-node3                         running (71m)     5m ago  71m     228M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  0809c412e2b8  
osd.10                                              ceph-pnataraj-rf2yxf-node5                         running (71m)    37s ago  71m     183M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  a6d7a05e210d  
osd.11                                              ceph-pnataraj-rf2yxf-node4                         running (71m)    37s ago  71m     106M    4096M  17.2.3-9.el9cp  9bd0ac2bfeb8  b3fd53627d3a  
prometheus.ceph-pnataraj-rf2yxf-node1-installer     ceph-pnataraj-rf2yxf-node1-installer  *:9095       running (77m)     3m ago  82m    88.1M        -                  514e6a882f6e  dc0d3c64cd9c  
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# 


Gateway node:

[root@ceph-pnataraj-rf2yxf-node4 cephuser]# systemctl -l | grep iscsi*
● ceph-66e3bae2-1eb5-11ed-a0f8-fa163e15309f.ceph-pnataraj-rf2yxf-node4.yviswu.service                   loaded failed failed    Ceph iscsi.test.ceph-pnataraj-rf2yxf-node4.yviswu for 66e3bae2-1eb5-11ed-a0f8-fa163e15309f
[root@ceph-pnataraj-rf2yxf-node4 cephuser]#




ceph version:
[ceph: root@ceph-pnataraj-rf2yxf-node1-installer /]# ceph version
ceph version 17.2.3-9.el9cp (ec1f163818fab0b1a8a98bfe1ec5c949373b0e6d) quincy (stable)
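The failed-daemon state shown above can also be detected programmatically. Below is a minimal sketch (not part of the product) that extracts the names of failed iscsi daemons from ceph health detail text; the helper name failed_iscsi_daemons and the embedded sample output are illustrative only.

```python
import re

# Abridged sample of the `ceph health detail` output shown above.
HEALTH_DETAIL = """\
HEALTH_WARN 2 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
    daemon iscsi.test.ceph-pnataraj-rf2yxf-node4.yviswu on ceph-pnataraj-rf2yxf-node4 is in error state
    daemon iscsi.test.ceph-pnataraj-rf2yxf-node5.zwjrom on ceph-pnataraj-rf2yxf-node5 is in error state
"""

def failed_iscsi_daemons(health_detail: str) -> list[str]:
    """Return the names of iscsi.* daemons reported in error state."""
    pattern = re.compile(r"daemon (iscsi\.\S+) on \S+ is in error state")
    return [m.group(1) for m in pattern.finditer(health_detail)]

print(failed_iscsi_daemons(HEALTH_DETAIL))
```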

Comment 9 Preethi 2022-08-20 02:43:02 UTC
Sure. Once we get confirmation, we can close the BZ as expected and document the same.

Comment 10 Preethi 2022-09-07 12:15:34 UTC
FYI, we also don't see any warning or message about the iSCSI feature being deprecated; the CLI still allows users to deploy it in 6.0, although the services fail to come up.

Comment 11 Preethi 2022-09-09 03:12:36 UTC
ceph status report after upgrading from 5.2 to 6.0 is below. We should have a warning message for users regarding this.

[root@magna021 pnataraj]# ceph status
  cluster:
    id:     c8ce6d50-c0a1-11ec-a99b-002590fc2a2e
    health: HEALTH_WARN
            2 failed cephadm daemon(s)
 
  services:
    mon:        5 daemons, quorum magna021,magna022,magna024,magna025,magna026 (age 20h)
    mgr:        magna022.icxgsh(active, since 20h), standbys: magna021.syfuos
    osd:        52 osds: 52 up (since 19h), 52 in (since 2d)
    rbd-mirror: 1 daemon active (1 hosts)
 
  data:
    pools:   17 pools, 1569 pgs
    objects: 2.39M objects, 9.1 TiB
    usage:   27 TiB used, 20 TiB / 48 TiB avail
    pgs:     1569 active+clean
 
  io:
    client:   520 KiB/s rd, 34 KiB/s wr, 623 op/s rd, 64 op/s wr
 
[root@magna021 pnataraj]# ceph health detail
HEALTH_WARN 2 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
    daemon iscsi.test.plena001.konnne on plena001 is in error state
    daemon iscsi.test.plena002.wgcgle on plena002 is in error state
[root@magna021 pnataraj]#
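The situation above suggests a pre-upgrade check that flags leftover iSCSI services before the cluster moves to 6.0. The sketch below assumes ceph orch ls --format json returns a list of objects with service_type and service_name fields (a much-reduced sample is inlined); the helper iscsi_services is hypothetical, not a shipped tool.

```python
import json

# Reduced stand-in for `ceph orch ls --format json` output.
ORCH_LS_JSON = json.dumps([
    {"service_type": "iscsi", "service_name": "iscsi.test"},
    {"service_type": "mon", "service_name": "mon"},
    {"service_type": "rbd-mirror", "service_name": "rbd-mirror"},
])

def iscsi_services(orch_ls_output: str) -> list[str]:
    """Return service names of type iscsi from orchestrator service listing JSON."""
    services = json.loads(orch_ls_output)
    return [s["service_name"] for s in services if s["service_type"] == "iscsi"]

blockers = iscsi_services(ORCH_LS_JSON)
if blockers:
    print(f"Remove these services before upgrading: {blockers}")
```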

Comment 14 Preethi 2022-09-26 08:38:28 UTC
@Ilya, yes, an issue was filed for the dashboard and verified. The iSCSI section is removed from the dashboard completely.

Comment 15 Preethi 2022-09-26 09:16:26 UTC
The issue is fixed in the latest RHCS 6.0 build.

Snippets below:

[ceph: root@magna021 /]# ceph orch upgrade status
{
    "target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:9067726d198edafd890e84376796de613d2f63221374d104078b8a0ceec7c529",
    "in_progress": true,
    "which": "Upgrading all daemon types on all hosts",
    "services_complete": [],
    "progress": "1/89 daemons upgraded",
    "message": "Error: UPGRADE_ISCSI_UNSUPPORTED: Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present"
}
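For automation around upgrades, the blocked state can be read straight out of that JSON. This is a sketch under the assumption that the message field carries the error string as shown above; upgrade_blocked_by_iscsi is a hypothetical helper, not a cephadm API.

```python
import json

# The `ceph orch upgrade status` JSON above, reduced to the fields we read.
UPGRADE_STATUS = json.dumps({
    "in_progress": True,
    "progress": "1/89 daemons upgraded",
    "message": "Error: UPGRADE_ISCSI_UNSUPPORTED: Upgrade attempted to RHCS "
               "release not supporting iscsi with iscsi daemons present",
})

def upgrade_blocked_by_iscsi(status_json: str) -> bool:
    """True if the upgrade status message reports the iSCSI blocker."""
    status = json.loads(status_json)
    return "UPGRADE_ISCSI_UNSUPPORTED" in (status.get("message") or "")

print(upgrade_blocked_by_iscsi(UPGRADE_STATUS))
```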
[ceph: root@magna021 /]# ceph orch ls      
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT                                     
alertmanager               ?:9093,9094      1/1  2m ago     5M   count:1                                       
crash                                     12/12  2m ago     5M   *                                             
grafana                    ?:3000           1/1  2m ago     5M   count:1                                       
iscsi.test                                  0/2  103s ago   3M   plena001;plena002                             
mgr                                         2/2  2m ago     5M   count:2                                       
mon                                         5/5  2m ago     4M   magna021;magna022;magna024;magna025;magna026  
node-exporter              ?:9100         12/12  2m ago     5w   count:12                                      
osd                                          38  2m ago     -    <unmanaged>                                   
osd.all-available-devices                    14  2m ago     2w   <unmanaged>                                   
prometheus                 ?:9095           1/1  2m ago     4w   count:1                                       
rbd-mirror                                  1/1  2m ago     4M   magna026   


[root@magna021 yum.repos.d]# ceph health detail
HEALTH_ERR 2 failed cephadm daemon(s); Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present
[WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
    daemon iscsi.test.plena001.konnne on plena001 is in error state
    daemon iscsi.test.plena002.wgcgle on plena002 is in error state
[ERR] UPGRADE_ISCSI_UNSUPPORTED: Upgrade attempted to RHCS release not supporting iscsi with iscsi daemons present
    Iscsi is no longer supported in RHCS 6.
    Please remove any iscsi services/daemons from the cluster before upgrading.
    If you instead would rather keep using iscsi than upgrade, please manually downgrade any
    upgraded daemons with `ceph orch daemon redeploy <daemon-name> --image <previous-5.x-image-name>`
[root@magna021 yum.repos.d]# 






cephadm version: ceph-17.2.3-39.el9cp
ceph version: ceph-17.2.3-39.el9cp

Will move the BZ to verified state. Documentation for this is tracked in JIRA ticket https://issues.redhat.com/browse/RHCEPH-4456

Comment 16 Preethi 2022-09-26 09:19:05 UTC
Will keep the BZ open until the BZ is documented.

Comment 19 Preethi 2022-09-26 09:33:14 UTC
Creating doc BZ separately for tracking. Hence, moving this to verified state.

Comment 24 Preethi 2022-09-28 04:24:44 UTC
I am OK with the doc text. Clearing the needinfo as there is no explicit ask.

Comment 39 errata-xmlrpc 2023-03-20 18:56:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

