Bug 2099470 - [iscsi] Adding/expanding iSCSI gateways in gwcli to an existing target fails with "Failed : /etc/ceph/iscsi-gateway.cfg on ceph-52-iscsifix-bcb6z****** does not match the local version. Correct and retry request"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z1
Assignee: Adam King
QA Contact: Preethi
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2092397 2126049
 
Reported: 2022-06-21 04:44 UTC by Preethi
Modified: 2023-02-28 10:06 UTC
CC List: 9 users

Fixed In Version: ceph-16.2.10-113.el8cp
Doc Type: Known Issue
Doc Text:
.Adding or expanding iSCSI gateways in `gwcli` fails due to a mismatch of `iscsi-gateway.cfg` across the iSCSI daemons
Because iSCSI daemons are not reconfigured automatically when the trusted IP list is updated in the specification file, adding or expanding iSCSI gateways in `gwcli` fails with `iscsi-gateway.cfg` not matching across the iSCSI daemons. As a workaround, run the `ceph orch reconfig _ISCSI_SERVICE_NAME_` command to reconfigure all iSCSI daemons in the service and update the `iscsi-gateway.cfg` file. This avoids the failure unless the trusted IP list is updated again, in which case you must run the command again.
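For example, with the iSCSI service name from this cluster (iscsi.iscsipool, per the ceph orch ls output below), a sketch of the workaround command would be:

ceph orch reconfig iscsi.iscsipool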
Clone Of:
Environment:
Last Closed: 2023-02-28 10:05:14 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-4589 0 None None None 2022-06-21 04:48:59 UTC
Red Hat Product Errata RHSA-2023:0980 0 None None None 2023-02-28 10:06:04 UTC

Description Preethi 2022-06-21 04:44:50 UTC
Description of problem:
[iscsi] Adding/expanding iSCSI gateways to an existing target fails with: "Failed : /etc/ceph/iscsi-gateway.cfg on ceph-52-iscsifix-bcb6z****** does not match the local version. Correct and retry request"

Expanding the deployment from 2 iSCSI gateway nodes to 4 iSCSI gateway nodes via ceph orch was successful.

Version-Release number of selected component (if applicable):
ceph version 16.2.8-46.el8cp

How reproducible:


Steps to Reproduce:
1. Deploy 2 iSCSI gateway nodes; configure the target, client and LUNs in gwcli.
2. Expand the gateway nodes using the ceph orch command (a sketch follows these steps).
3. Log into the primary gateway node, enter gwcli, try adding the new gateway nodes, and observe the behaviour.
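
A sketch of step 2 (expanding the placement and re-applying the iSCSI spec). The field names follow the cephadm iSCSI service spec; the pool, credentials, host names and IPs below are taken from the outputs later in this report:

cat > iscsi-spec.yaml << 'EOF'
service_type: iscsi
service_id: iscsipool
placement:
  hosts:
    - ceph-52-iscsifix-bcb6zp-node4
    - ceph-52-iscsifix-bcb6zp-node5
    - ceph-52-iscsifix-bcb6zp-node3
    - ceph-52-iscsifix-bcb6zp-node2.novalocal
spec:
  pool: iscsipool
  api_user: admin
  api_password: admin
  trusted_ip_list: "10.0.209.77,10.0.211.180,10.0.210.106,10.0.210.60"
EOF
ceph orch apply -i iscsi-spec.yaml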


Actual results: Adding the new gateway nodes to the existing target fails with the /etc/ceph/iscsi-gateway.cfg mismatch error.

New gateway nodes are:
ceph-52-iscsifix-bcb6zp-node2.novalocal  10.0.210.60   
                         
ceph-52-iscsifix-bcb6zp-node3            10.0.210.106                        
  
Primary gateway nodes, which were added before the expansion:
ceph-52-iscsifix-bcb6zp-node4            10.0.209.77   mds osd                           
ceph-52-iscsifix-bcb6zp-node5            10.0.211.180  mds osd

Expected results: The newly added gateway nodes should be added successfully.


Snippet-

/iscsi-target...scsi/gateways> create ceph-52-iscsifix-bcb6zp-node3 10.0.210.106
Adding gateway, sync'ing 1 disk(s) and 1 client(s)
Failed : /etc/ceph/iscsi-gateway.cfg on ceph-52-iscsifix-bcb6zp-node3 does not match the local version. Correct and retry request
/iscsi-target...scsi/gateways>


Additional info:
[root@ceph-52-iscsifix-bcb6zp-node6 cephuser]# ceph orch host ls
HOST                                     ADDR          LABELS                    STATUS  
ceph-52-iscsifix-bcb6zp-node1-installer  10.0.209.233  _admin mgr installer mon          
ceph-52-iscsifix-bcb6zp-node2.novalocal  10.0.210.60   mgr mon                           
ceph-52-iscsifix-bcb6zp-node3            10.0.210.106  osd mon                           
ceph-52-iscsifix-bcb6zp-node4            10.0.209.77   mds osd                           
ceph-52-iscsifix-bcb6zp-node5            10.0.211.180  mds osd                           
5 hosts in cluster
[root@ceph-52-iscsifix-bcb6zp-node6 cephuser]# ceph orch ls
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT                                                                                                                          
iscsi.iscsipool                       4/4  7m ago     3d   ceph-52-iscsifix-bcb6zp-node4;ceph-52-iscsifix-bcb6zp-node5;ceph-52-iscsifix-bcb6zp-node3;ceph-52-iscsifix-bcb6zp-node2.novalocal  
mgr                                   2/2  2m ago     4d   label:mgr                                                                                                                          
mon                                   3/3  7m ago     4d   label:mon                                                                                                                          
osd.all-available-devices              10  7m ago     4d   *                                                                                                                                  
[root@ceph-52-iscsifix-bcb6zp-node6 cephuser]# ceph status
  cluster:
    id:     131dc8ca-ed3d-11ec-8446-fa163e2ee952
    health: HEALTH_WARN
            Failed to apply 1 service(s): osd.all-available-devices
 
  services:
    mon:         3 daemons, quorum ceph-52-iscsifix-bcb6zp-node1-installer,ceph-52-iscsifix-bcb6zp-node2,ceph-52-iscsifix-bcb6zp-node3 (age 60m)
    mgr:         ceph-52-iscsifix-bcb6zp-node1-installer.fzkgtd(active, since 3d), standbys: ceph-52-iscsifix-bcb6zp-node2.pzikhd
    osd:         10 osds: 10 up (since 60m), 10 in (since 4d)
    tcmu-runner: 2 portals active (2 hosts)
 
  data:
    pools:   2 pools, 129 pgs
    objects: 4.66k objects, 18 GiB
    usage:   54 GiB used, 146 GiB / 200 GiB avail
    pgs:     129 active+clean
 
  io:
    client:   119 KiB/s rd, 9 op/s rd, 0 op/s wr
 
[root@ceph-52-iscsifix-bcb6zp-node6 cephuser]# 


/etc/ceph/iscsi-gateway.cfg

node 4-
[root@ceph-52-iscsifix-bcb6zp-node4 /]# cat /etc/ceph/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.iscsipool.ceph-52-iscsifix-bcb6zp-node4.oynhxh
pool = iscsipool
trusted_ip_list = 10.0.209.77,10.0.211.180,10.0.209.233
minimum_gateways = 1
api_port = ''
api_user = admin
api_password = admin
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False
[root@ceph-52-iscsifix-bcb6zp-node4 /]#

Comment 1 Preethi 2022-06-21 05:01:41 UTC
NOTE: The trusted_ip_list in /etc/ceph/iscsi-gateway.cfg is not getting updated (and is not in order) on the existing gateway nodes, while the newly added gateway nodes show all IPs updated, along with the client node IP.

Below snippet:
[root@ceph-52-iscsifix-bcb6zp-node2 /]# cat /etc/ceph/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.iscsipool.ceph-52-iscsifix-bcb6zp-node2.tedchj
pool = iscsipool
trusted_ip_list = 10.0.209.77,10.0.211.180,10.0.210.106,10.0.210.60,10.0.209.233
minimum_gateways = 1
api_port = ''
api_user = admin
api_password = admin
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False
[root@ceph-52-iscsifix-bcb6zp-node2 /]#

Comment 2 Xiubo Li 2022-06-21 05:17:14 UTC
When adding a gateway, ceph-iscsi checks /etc/ceph/iscsi-gateway.cfg by comparing its hash across the gateways. I checked the nodes with Preethi and found that the trusted_ip_list values are not exactly the same on all the gateway nodes after expanding.
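
To eyeball which gateways diverged, a sketch (assumes passwordless SSH from the admin node; the on-host config path pattern follows the daemon directories shown later in comment 9):

for h in ceph-52-iscsifix-bcb6zp-node4 ceph-52-iscsifix-bcb6zp-node5 \
         ceph-52-iscsifix-bcb6zp-node3 ceph-52-iscsifix-bcb6zp-node2.novalocal; do
    echo "== $h =="
    # print the field that diverged on each gateway host
    ssh "$h" "grep trusted_ip_list /var/lib/ceph/*/iscsi.iscsipool.*/iscsi-gateway.cfg"
done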

Comment 5 Preethi 2022-06-23 05:29:14 UTC
@Adam, we are not able to add the newly added/expanded iSCSI gateways to the existing targets/gateway nodes, because the trusted_ip_list in /etc/ceph/iscsi-gateway.cfg is not updated (and not in the same order) on the existing gateways.


Below is a snippet from a newly added node. All 4 IPs are present in its trusted_ip_list; not sure why the client node IP is also added here.
[root@ceph-52-iscsifix-bcb6zp-node2 /]# cat /etc/ceph/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.iscsipool.ceph-52-iscsifix-bcb6zp-node2.tedchj
pool = iscsipool
trusted_ip_list = 10.0.209.77,10.0.211.180,10.0.210.106,10.0.210.60,10.0.209.233
minimum_gateways = 1
api_port = ''
api_user = admin
api_password = admin
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False
[root@ceph-52-iscsifix-bcb6zp-node2 /]# 

The above config is from one of the newly added nodes. However, the existing gateway nodes are not updated with the new IPs, even though the deployment of the additional gateway nodes was successful.


Snippet of node4: --> Here we do not see the newly added IPs
node 4-
[root@ceph-52-iscsifix-bcb6zp-node4 /]# cat /etc/ceph/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.iscsipool.ceph-52-iscsifix-bcb6zp-node4.oynhxh
pool = iscsipool
trusted_ip_list = 10.0.209.77,10.0.211.180,10.0.209.233
minimum_gateways = 1
api_port = ''
api_user = admin
api_password = admin
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False
[root@ceph-52-iscsifix-bcb6zp-node4 /]#

Comment 6 Adam King 2022-06-23 17:58:00 UTC
(In reply to Preethi from comment #5)
> @Adam, we are not able to add the newly added or expanded iscsi gateways to
> the existing targets/gateway nodes as we see the trusted_ips in the
> /etc/ceph/iscsi-gatway.cfg is not updated and not in the order.
> [...]

okay, so it seems like we just need to have the old iscsi daemons have their trusted ip list updated when we deploy more of them then? If you have cephadm redeploy the old iscsi daemons ("ceph orch daemon redeploy <daemon-name>" for each one) does everything then work as expected?
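
For reference, a minimal sketch of that suggestion (the daemon name below is a placeholder; list the real names with the ps command first):

ceph orch ps --daemon-type iscsi
ceph orch daemon redeploy iscsi.iscsipool.<host>.<suffix>    # repeat for each pre-existing gateway daemon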

Comment 7 Xiubo Li 2022-06-23 23:47:34 UTC
Please note that, except for "cluster_client_name" and "logger_level", all the other settings in /etc/ceph/iscsi-gateway.cfg must be exactly the same on all the ceph-iscsi gateway nodes.
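
A sketch of a comparison that honors that rule, ignoring the per-node cluster_client_name line (hostnames are from this cluster; the on-host path pattern follows comment 9; logger_level does not appear in these files):

diff <(ssh ceph-52-iscsifix-bcb6zp-node4 "grep -v cluster_client_name /var/lib/ceph/*/iscsi.iscsipool.*/iscsi-gateway.cfg") \
     <(ssh ceph-52-iscsifix-bcb6zp-node2.novalocal "grep -v cluster_client_name /var/lib/ceph/*/iscsi.iscsipool.*/iscsi-gateway.cfg")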

Comment 8 Preethi 2022-06-24 03:14:41 UTC
(In reply to Adam King from comment #6)
> [...]
> okay, so it seems like we just need to have the old iscsi daemons have their
> trusted ip list updated when we deploy more of them then? If you have
> cephadm redeploy the old iscsi daemons ("ceph orch daemon redeploy
> <daemon-name>" for each one) does everything then work as expected?


----> Yes. Everything should be the same except the client name and the log attributes. The trusted IPs should be present for all nodes, and I guess in the same order, for this to work.

Comment 9 Adam King 2022-08-09 19:01:15 UTC
Did a bit of testing, and it does seem like, if the trusted_ip_list is updated in the spec, cephadm has no mechanism in place to update the config for iscsi daemons that are already deployed. In my testing, redeploying an iscsi daemon would get it to have the correct trusted_ip_list, and doing so with the primary iscsi would allow adding the new iscsi as a gateway.

[root@vm-02 ~]# cat /var/lib/ceph/a9615a4a-180d-11ed-b777-5254009ebf8c/iscsi.foo.vm-02.vkzeez/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.foo.vm-02.vkzeez
pool = foo
trusted_ip_list = 192.168.122.219,192.168.122.71,192.168.122.58,192.168.122.219
minimum_gateways = 1
api_port = ''
api_user = u
api_password = p
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False[root@vm-02 ~]# exit
logout
Connection to vm-02 closed.
[root@vm-00 ~]# cat /var/lib/ceph/a9615a4a-180d-11ed-b777-5254009ebf8c/iscsi.foo.vm-00.opevgn/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.foo.vm-00.opevgn
pool = foo
trusted_ip_list = 192.168.122.219
minimum_gateways = 1
api_port = ''
api_user = u
api_password = p
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = False[root@vm-00 ~]# 
[root@vm-00 ~]# 
[root@vm-00 ~]# podman exec -it bd2604909c9d /bin/bash
[root@vm-00 /]# gwcli
/iscsi-target...at:rh7-client> cd /iscsi-targets/  
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/
/iscsi-target...-gw:iscsi-igw> cd gateways/
/iscsi-target...-igw/gateways> create vm-02 192.168.122.58
Adding gateway, sync'ing 0 disk(s) and 1 client(s)
Failed : /etc/ceph/iscsi-gateway.cfg on vm-02 does not match the local version. Correct and retry request
/iscsi-target...-igw/gateways> exit
[root@vm-00 /]# exit
exit
[root@vm-00 ~]# cephadm shell
Inferring fsid a9615a4a-180d-11ed-b777-5254009ebf8c
Inferring config /var/lib/ceph/a9615a4a-180d-11ed-b777-5254009ebf8c/mon.vm-00/config
Using ceph image with id '9adf290b8156' and tag 'latest' created on 2022-08-09 16:39:07 +0000 UTC
quay.io/adk3798/ceph@sha256:e624d26b2617571a45d22fbd5865eb0987d7d789c2cbb28b818e1ea74890654e
[ceph: root@vm-00 /]# ceph orch ps --daemon-type iscsi
NAME                    HOST   PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
iscsi.foo.vm-00.opevgn  vm-00         running (13m)    34s ago  13m     126M        -  3.5      9adf290b8156  bd2604909c9d  
iscsi.foo.vm-01.sybpav  vm-01         running (13m)     2m ago  13m    79.0M        -  3.5      9adf290b8156  d601790d671d  
iscsi.foo.vm-02.vkzeez  vm-02         running (5m)      5m ago   5m    55.8M        -  3.5      9adf290b8156  6be881a15f43  
[ceph: root@vm-00 /]# ceph orch daemon redeploy iscsi.foo.vm-00.opevgn
Scheduled to redeploy iscsi.foo.vm-00.opevgn on host 'vm-00'
[ceph: root@vm-00 /]# exit
exit
[root@vm-00 ~]# cat /var/lib/ceph/a9615a4a-180d-11ed-b777-5254009ebf8c/iscsi.foo.vm-00.opevgn/iscsi-gateway.cfg 
# This file is generated by cephadm.
[config]
cluster_client_name = client.iscsi.foo.vm-00.opevgn
pool = foo
trusted_ip_list = 192.168.122.219,192.168.122.71,192.168.122.58,192.168.122.219
minimum_gateways = 1
api_port = ''
api_user = u
api_password = p
api_secure = False
log_to_stderr = True
log_to_stderr_prefix = debug
log_to_file = Fa
[root@vm-00 ~]# podman ps | grep iscsi
6a5fb5c4c769  quay.io/adk3798/ceph@sha256:e624d26b2617571a45d22fbd5865eb0987d7d789c2cbb28b818e1ea74890654e                        2 minutes ago   Up 2 minutes ago               ceph-a9615a4a-180d-11ed-b777-5254009ebf8c-iscsi-foo-vm-00-opevgn-tcmu
8054009fe832  quay.io/adk3798/ceph@sha256:e624d26b2617571a45d22fbd5865eb0987d7d789c2cbb28b818e1ea74890654e                        2 minutes ago   Up 2 minutes ago               ceph-a9615a4a-180d-11ed-b777-5254009ebf8c-iscsi-foo-vm-00-opevgn
[root@vm-00 ~]# podman exec -it 8054009fe832 /bin/bash
[root@vm-00 /]# gwcli
Warning: Could not load preferences file /root/.gwcli/prefs.bin.
/> cd iscsi-targets/
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/
/iscsi-target...-gw:iscsi-igw> cd gateways/
/iscsi-target...-igw/gateways> create vm-02 192.168.122.58
Adding gateway, sync'ing 0 disk(s) and 1 client(s)
ok
/iscsi-target...-igw/gateways> 


Does the redeploy also fix things in your setup @Preethi? If so, I guess cephadm just needs to force a reconfig of existing iscsi daemons when the trusted_ip_list changes.


Also, that one additional IP that is being added is the IP of the active mgr. It's being added so the dashboard can access the iSCSI gateway API.

Comment 10 Preethi 2022-08-10 04:15:42 UTC
ceph orch redeploy was not performed. The issue was seen and checked only while expanding from 2 to 4 iSCSI gateways using the ceph orch apply command, where we saw the new trusted IPs not getting updated in the config file.
If redeploy works for this, it could be considered a workaround for now.

Comment 11 Adam King 2022-08-24 12:58:16 UTC
https://github.com/ceph/ceph/pull/47521 should fix this, but given the tight timeline for 5.3, the fact that it would still need to get through testing in main and then in pacific before coming downstream, and the existence of redeploying the iscsi service as a workaround, I'm pushing this back to 5.3z1.

Comment 25 Preethi 2023-02-16 15:17:08 UTC
This is working as expected. We are able to expand the gateways and add them to the existing gateways via gwcli. Hence, moving to the verified state.
Below are the snippet and ceph version:
/iscsi-target...-igw/gateways> create ceph-mastercard123-4nodes-j70h3f-node13 10.0.211.38
Adding gateway, sync'ing 10 disk(s) and 1 client(s)
ok
/iscsi-target...-igw/gateways> create ceph-mastercard123-4nodes-j70h3f-node14 10.0.210.255
Adding gateway, sync'ing 10 disk(s) and 1 client(s)
ok
/iscsi-target...-igw/gateways> ls
o- gateways .................................................................................................. [Up: 4/4, Portals: 4]
  o- ceph-mastercard123-4nodes-j70h3f-node10 ................................................................... [10.0.208.213 (UP)]
  o- ceph-mastercard123-4nodes-j70h3f-node11 ................................................................... [10.0.210.220 (UP)]
  o- ceph-mastercard123-4nodes-j70h3f-node13 .................................................................... [10.0.211.38 (UP)]
  o- ceph-mastercard123-4nodes-j70h3f-node14 ................................................................... [10.0.210.255 (UP)]
/iscsi-target...-igw/gateways> cd /disks
/disks>  create iscsi1 image=disk size=50g count=10
ok
/disks> ls
o- disks ........................................................................................................ [1500G, Disks: 20]
  o- iscsi1 ....................................................................................................... [iscsi1 (1500G)]
    o- disk1 ......................................................................................... [iscsi1/disk1 (Unknown, 50G)]
    o- disk2 ......................................................................................... [iscsi1/disk2 (Unknown, 50G)]
    o- disk3 ......................................................................................... [iscsi1/disk3 (Unknown, 50G)]
    o- disk4 ......................................................................................... [iscsi1/disk4 (Unknown, 50G)]
    o- disk5 ......................................................................................... [iscsi1/disk5 (Unknown, 50G)]
    o- disk6 ......................................................................................... [iscsi1/disk6 (Unknown, 50G)]
    o- disk7 ......................................................................................... [iscsi1/disk7 (Unknown, 50G)]
    o- disk8 ......................................................................................... [iscsi1/disk8 (Unknown, 50G)]
    o- disk9 ......................................................................................... [iscsi1/disk9 (Unknown, 50G)]
    o- disk10 ....................................................................................... [iscsi1/disk10 (Unknown, 50G)]
    o- test1 ......................................................................................... [iscsi1/test1 (Online, 100G)]
    o- test2 ......................................................................................... [iscsi1/test2 (Online, 100G)]
    o- test3 ......................................................................................... [iscsi1/test3 (Online, 100G)]
    o- test4 ......................................................................................... [iscsi1/test4 (Online, 100G)]
    o- test5 ......................................................................................... [iscsi1/test5 (Online, 100G)]
    o- test6 ......................................................................................... [iscsi1/test6 (Online, 100G)]
    o- test7 ......................................................................................... [iscsi1/test7 (Online, 100G)]
    o- test8 ......................................................................................... [iscsi1/test8 (Online, 100G)]
    o- test9 ......................................................................................... [iscsi1/test9 (Online, 100G)]
    o- test10 ....................................................................................... [iscsi1/test10 (Online, 100G)]
/disks> goto hosts
/iscsi-target...eph-igw/hosts> ls
o- hosts ............................................................................................. [Auth: ACL_ENABLED, Hosts: 1]
  o- iqn.1994-05.com.redhat:rh7-client .............................................................. [Auth: None, Disks: 10(1000G)]
    o- lun 0 .................................................. [iscsi1/test1(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
    o- lun 1 .................................................. [iscsi1/test2(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
    o- lun 2 .................................................. [iscsi1/test3(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
    o- lun 3 .................................................. [iscsi1/test4(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
    o- lun 4 .................................................. [iscsi1/test5(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
    o- lun 5 .................................................. [iscsi1/test6(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
    o- lun 6 .................................................. [iscsi1/test7(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
    o- lun 7 .................................................. [iscsi1/test8(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
    o- lun 8 .................................................. [iscsi1/test9(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
    o- lun 9 ................................................. [iscsi1/test10(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
/iscsi-target...eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client/
/iscsi-target...at:rh7-client> ls
o- iqn.1994-05.com.redhat:rh7-client ................................................................ [Auth: None, Disks: 10(1000G)]
  o- lun 0 .................................................... [iscsi1/test1(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 1 .................................................... [iscsi1/test2(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 2 .................................................... [iscsi1/test3(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 3 .................................................... [iscsi1/test4(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 4 .................................................... [iscsi1/test5(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 5 .................................................... [iscsi1/test6(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 6 .................................................... [iscsi1/test7(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 7 .................................................... [iscsi1/test8(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 8 .................................................... [iscsi1/test9(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 9 ................................................... [iscsi1/test10(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
/iscsi-target...at:rh7-client> @hosts
/iscsi-target...eph-igw/hosts> disk add iscsi1/disk1
Command not found disk
/iscsi-target...eph-igw/hosts> cd iqn.1994-05.com.redhat:rh7-client/
/iscsi-target...at:rh7-client> disk add iscsi1/disk1
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk2
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk3
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk4
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk5
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk6
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk7
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk8
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk9
ok
/iscsi-target...at:rh7-client> disk add iscsi1/disk10
ok
/iscsi-target...at:rh7-client> ls
o- iqn.1994-05.com.redhat:rh7-client ................................................................ [Auth: None, Disks: 20(1500G)]
  o- lun 0 .................................................... [iscsi1/test1(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 1 .................................................... [iscsi1/test2(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 2 .................................................... [iscsi1/test3(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 3 .................................................... [iscsi1/test4(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 4 .................................................... [iscsi1/test5(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 5 .................................................... [iscsi1/test6(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 6 .................................................... [iscsi1/test7(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 7 .................................................... [iscsi1/test8(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 8 .................................................... [iscsi1/test9(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node10]
  o- lun 9 ................................................... [iscsi1/test10(100G), Owner: ceph-mastercard123-4nodes-j70h3f-node11]
  o- lun 10 .................................................... [iscsi1/disk1(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node13]
  o- lun 11 .................................................... [iscsi1/disk2(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node14]
  o- lun 12 .................................................... [iscsi1/disk3(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node13]
  o- lun 13 .................................................... [iscsi1/disk4(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node14]
  o- lun 14 .................................................... [iscsi1/disk5(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node13]
  o- lun 15 .................................................... [iscsi1/disk6(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node14]
  o- lun 16 .................................................... [iscsi1/disk7(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node13]
  o- lun 17 .................................................... [iscsi1/disk8(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node14]
  o- lun 18 .................................................... [iscsi1/disk9(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node13]
  o- lun 19 ................................................... [iscsi1/disk10(50G), Owner: ceph-mastercard123-4nodes-j70h3f-node14]
/iscsi-target...at:rh7-client>

Comment 27 errata-xmlrpc 2023-02-28 10:05:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980

