Bug 1962492 - [ISCSI] - iSCSI initiator does not show the disk created fdisk -l
Summary: [ISCSI] - iSCSI initiator does not show the disk created fdisk -l
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: iSCSI
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Xiubo Li
QA Contact: Gopi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-20 08:12 UTC by Harish Munjulur
Modified: 2021-08-30 08:31 UTC
CC List: 8 users

Fixed In Version: ceph-16.2.0-69.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:30:53 UTC
Embargoed:
gpatta: needinfo+


Attachments
rbd-target-api logs (70.11 KB, application/gzip)
2021-05-24 05:53 UTC, Harish Munjulur


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1025 0 None None None 2021-08-26 17:31:48 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:31:03 UTC

Comment 8 Harish Munjulur 2021-05-21 12:26:28 UTC
I have added /var/log/messages at the link: https://drive.google.com/drive/folders/1J1Z1VDIj9HfZ7A9DTo5VU2ZpV-_zdYvZ?usp=sharing


tcmu-runner log files: 
[root@magna032 log]# podman logs 69273f266cd9
log file path now is '/var/log/tcmu-runner.log'

But no such file exists at that path. 
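(Since the path reported by podman logs refers to a path inside the container, a check along these lines, using the container ID above, would show whether the file exists in the container filesystem. This is a suggested check, not something that was run:)

# Hypothetical check inside the tcmu-runner container
podman exec 69273f266cd9 ls -l /var/log/tcmu-runner.log
podman exec 69273f266cd9 cat /var/log/tcmu-runner.log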


I deployed as follows (old deployment):

1. I created an iscsi pool and enabled the rbd application
2. I labelled a couple of nodes with iscsi, then 
3. [ceph: root@maint-1 /]# ceph orch apply iscsi --pool iscsi --api_user admin --api_password admin --placement="label:iscsi"
Scheduled iscsi.iscsi update...


I did not use an iscsi.yaml config here. However, even after using an iscsi.yaml config with trusted_ip_list (on a different cluster), I see the same issue.
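For reference, steps 1-3 above correspond roughly to the following commands (a sketch assuming default pool settings and the gateway nodes magna031/magna032; not a transcript of the exact commands that were run):

# 1. Create the pool and enable the rbd application on it
ceph osd pool create iscsi
ceph osd pool application enable iscsi rbd

# 2. Label the gateway nodes with "iscsi"
ceph orch host label add magna031 iscsi
ceph orch host label add magna032 iscsi

# 3. Deploy the iSCSI service on the labelled nodes (as in step 3 above)
ceph orch apply iscsi --pool iscsi --api_user admin --api_password admin --placement="label:iscsi"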

Comment 10 Harish Munjulur 2021-05-24 05:51:09 UTC
Hi Xiubo Li, thanks for sharing the steps to find the logs requested above. 

1. firewalld status: it is disabled on both gateway nodes

[root@magna031 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@magna032 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

2. tcmu-runner.log

on magna031 gateway node: 

sh-4.4# cat tcmu-runner.log 
2021-05-17 10:06:21.866 7:tcmu-runner [CRIT] main:1302: Starting...
2021-05-17 10:06:21.881 7:tcmu-runner [INFO] tcmur_register_handler:92: Handler rbd is registered
2021-05-17 10:06:21.956 7:tcmu-runner [INFO] tcmu_rbd_open:1162 rbd/rbd.disk_1: address: {10.8.128.31:0/3050073637}
2021-05-20 08:10:07.951 7:cmdproc-uio0 [INFO] alua_implicit_transition:581 rbd/rbd.disk_1: Starting read lock acquisition operation.
2021-05-20 08:10:07.952 7:ework-thread [INFO] tcmu_acquire_dev_lock:486 rbd/rbd.disk_1: Read lock acquisition successful


sh-4.4# cat tcmu-runner.log 
2021-05-17 10:06:56.026 7:tcmu-runner [CRIT] main:1302: Starting...
2021-05-17 10:06:56.041 7:tcmu-runner [INFO] tcmur_register_handler:92: Handler rbd is registered
2021-05-17 10:06:56.099 7:tcmu-runner [INFO] tcmu_rbd_open:1162 rbd/rbd.disk_1: address: {10.8.128.32:0/1572013376}
2021-05-19 06:50:27.534 7:cmdproc-uio0 [INFO] alua_implicit_transition:581 rbd/rbd.disk_1: Starting read lock acquisition operation.
2021-05-19 06:50:27.535 7:ework-thread [INFO] tcmu_acquire_dev_lock:486 rbd/rbd.disk_1: Read lock acquisition successful

3. rbd-target-gw logs - Folder is empty

sh-4.4# ls
ceph  dnf.librepo.log  dnf.log	dnf.rpm.log  ganesha  hawkey.log  maillog  messages  rbd-target-api  rbd-target-gw  rhsm  secure  spooler  tcmu-runner.log
sh-4.4# cd rbd-target-gw/
sh-4.4# ls  //Empty 


4. /etc/iscsi-gateway.cfg

I don't see the iscsi-gateway.cfg file in the container or on the node. 

5. I have attached the rbd-target-api logs

6. contents of /var/log 
sh-4.4# ls
ceph  dnf.librepo.log  dnf.log	dnf.rpm.log  ganesha  hawkey.log  maillog  messages  rbd-target-api  rbd-target-gw  rhsm  secure  spooler  tcmu-runner.log

Comment 11 Harish Munjulur 2021-05-24 05:53:59 UTC
Created attachment 1786270 [details]
rbd-target-api logs

Comment 17 Gopi 2021-06-11 11:45:30 UTC
Working as expected from QA with mixed IPs (IPv4 and IPv6) in trusted_ip_list.

[ceph: root@magna007 ~]# cat iscsi.yaml 
service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - magna007
  - magna010
spec:
  pool: iscsi_pool
  trusted_ip_list: "10.8.128.7,10.8.128.10,2620:52:0:880:225:90ff:fefc:2538,2620:52:0:880:225:90ff:fefc:252c"
  api_user: admin
  api_password: admin
  api_secure: false
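
(A spec like this would normally be applied with the orchestrator; the exact command is not shown in this comment, but with cephadm it would be something like:)

ceph orch apply -i iscsi.yaml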

/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
  o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 2]
    o- disks ............................................................................................................ [Disks: 1]
    | o- rbd/disk_1 ...................................................................................... [Owner: magna007, Lun: 0]
    o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
    | o- magna007 ................................................................................................ [10.8.128.7 (UP)]
    | o- magna010 ............................................................................................... [10.8.128.10 (UP)]
    o- host-groups .................................................................................................... [Groups : 0]
    o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 1]
      o- iqn.1994-05.com.redhat:rh7-client .................................................. [LOGGED-IN, Auth: CHAP, Disks: 1(50G)]
        o- lun 0 ................................................................................ [rbd/disk_1(50G), Owner: magna007]
/iscsi-targets>
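
The initiator-side steps that produce the multipath and fdisk output below are not shown here; with open-iscsi they would look roughly like the following sketch (the CHAP credentials are placeholders, since they are not included in this comment):

# Discover the target through one of the gateway portals
iscsiadm -m discovery -t sendtargets -p 10.8.128.7

# Configure CHAP for this target (placeholder credentials)
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -o update -n node.session.auth.username -v <chap_user>
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -o update -n node.session.auth.password -v <chap_password>

# Log in; the LUN then shows up as sde/sdf and the multipath device below
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -l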


[root@magna108 ubuntu]# multipath -ll
.
.
.
3600140574bd1036b1d24cd6a13c11702 dm-6 LIO-ORG,TCMU device
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 7:0:0:0 sdf 8:80 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 6:0:0:0 sde 8:64 active ready running
[root@magna108 ubuntu]#

[root@magna108 ubuntu]# fdisk -l
.
.
.
Disk /dev/mapper/ceph--14b17026--899d--4499--84d7--470abe3546f5-osd--block--a5ab9580--7b19--49bc--8d36--d2dad7cf9872: 931.5 GiB, 1000203091968 bytes, 1953521664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sde: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes


Disk /dev/sdf: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes


Disk /dev/mapper/3600140574bd1036b1d24cd6a13c11702: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes
[root@magna108 ubuntu]#

Comment 20 errata-xmlrpc 2021-08-30 08:30:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

