Bug 1962492
| Summary: | [ISCSI] - iSCSI initiator does not show the disk created fdisk -l | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Harish Munjulur <hmunjulu> |
| Component: | iSCSI | Assignee: | Xiubo Li <xiubli> |
| Status: | CLOSED ERRATA | QA Contact: | Gopi <gpatta> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 5.0 | CC: | ceph-eng-bugs, ceph-qe-bugs, gpatta, idryomov, mmurthy, pcuzner, tserlin, vereddy |
| Target Milestone: | --- | Flags: | gpatta: needinfo+ |
| Target Release: | 5.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.0-69.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-30 08:30:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | rbd-target-api logs (attachment 1786270) | | |
Comment 8 (Harish Munjulur, 2021-05-21 12:26:28 UTC)
Hi Xiubo Li, thanks for sharing the steps to find the required logs above.
1. firewalld status: It is disabled in both the gateway nodes
[root@magna031 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@magna032 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
2. tcmu-runner.log
on magna031 gateway node: 
sh-4.4# cat tcmu-runner.log 
2021-05-17 10:06:21.866 7:tcmu-runner [CRIT] main:1302: Starting...
2021-05-17 10:06:21.881 7:tcmu-runner [INFO] tcmur_register_handler:92: Handler rbd is registered
2021-05-17 10:06:21.956 7:tcmu-runner [INFO] tcmu_rbd_open:1162 rbd/rbd.disk_1: address: {10.8.128.31:0/3050073637}
2021-05-20 08:10:07.951 7:cmdproc-uio0 [INFO] alua_implicit_transition:581 rbd/rbd.disk_1: Starting read lock acquisition operation.
2021-05-20 08:10:07.952 7:ework-thread [INFO] tcmu_acquire_dev_lock:486 rbd/rbd.disk_1: Read lock acquisition successful
on magna032 gateway node:
sh-4.4# cat tcmu-runner.log 
2021-05-17 10:06:56.026 7:tcmu-runner [CRIT] main:1302: Starting...
2021-05-17 10:06:56.041 7:tcmu-runner [INFO] tcmur_register_handler:92: Handler rbd is registered
2021-05-17 10:06:56.099 7:tcmu-runner [INFO] tcmu_rbd_open:1162 rbd/rbd.disk_1: address: {10.8.128.32:0/1572013376}
2021-05-19 06:50:27.534 7:cmdproc-uio0 [INFO] alua_implicit_transition:581 rbd/rbd.disk_1: Starting read lock acquisition operation.
2021-05-19 06:50:27.535 7:ework-thread [INFO] tcmu_acquire_dev_lock:486 rbd/rbd.disk_1: Read lock acquisition successful
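Both gateways log "Read lock acquisition successful", which can be confirmed programmatically when the logs get long. A throwaway sketch (the regex and helper are my own, matched to the line format shown above, not part of tcmu-runner):

```python
# Parse tcmu-runner.log entries of the form:
#   <timestamp> <thread> [LEVEL] <function:line device: message>
# to extract the severity and message, e.g. to scan for the lock-acquisition line.
import re

LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<thread>\S+) \[(?P<level>\w+)\] (?P<msg>.*)$"
)

sample = ("2021-05-20 08:10:07.952 7:ework-thread [INFO] "
          "tcmu_acquire_dev_lock:486 rbd/rbd.disk_1: Read lock acquisition successful")
m = LINE.match(sample)
print(m.group("level"))                                      # INFO
print("Read lock acquisition successful" in m.group("msg"))  # True
```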
3. rbd-target-gw logs - Folder is empty
sh-4.4# ls
ceph  dnf.librepo.log  dnf.log	dnf.rpm.log  ganesha  hawkey.log  maillog  messages  rbd-target-api  rbd-target-gw  rhsm  secure  spooler  tcmu-runner.log
sh-4.4# cd rbd-target-gw/
sh-4.4# ls    # directory is empty
4. /etc/iscsi-gateway.cfg
I don't see the iscsi-gateway.cfg file in the container/node. 
5. I have attached the rbd-target-api logs
6. contents of /var/log 
sh-4.4# ls
ceph  dnf.librepo.log  dnf.log	dnf.rpm.log  ganesha  hawkey.log  maillog  messages  rbd-target-api  rbd-target-gw  rhsm  secure  spooler  tcmu-runner.log
Created attachment 1786270: rbd-target-api logs
Working as expected from QA with mixed IPs.
[ceph: root@magna007 ~]# cat iscsi.yaml 
service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - magna007
  - magna010
spec:
  pool: iscsi_pool
  trusted_ip_list: "10.8.128.7,10.8.128.10,2620:52:0:880:225:90ff:fefc:2538,2620:52:0:880:225:90ff:fefc:252c"
  api_user: admin
  api_password: admin
  api_secure: false
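The spec above mixes IPv4 and IPv6 entries in `trusted_ip_list`, which is what the fix is being verified against. As a quick sanity check (a sketch; the helper name is mine, not part of Ceph), the entries can be validated with Python's stdlib `ipaddress` module:

```python
# Confirm the trusted_ip_list from iscsi.yaml contains valid addresses
# from both address families (IPv4 and IPv6).
import ipaddress

trusted_ip_list = ("10.8.128.7,10.8.128.10,"
                   "2620:52:0:880:225:90ff:fefc:2538,2620:52:0:880:225:90ff:fefc:252c")

def ip_versions(csv):
    """Return the set of IP versions (4 and/or 6) found in a comma-separated list."""
    return {ipaddress.ip_address(ip.strip()).version for ip in csv.split(",")}

print(ip_versions(trusted_ip_list) == {4, 6})  # True: both families present
```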
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
  o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 2]
    o- disks ............................................................................................................ [Disks: 1]
    | o- rbd/disk_1 ...................................................................................... [Owner: magna007, Lun: 0]
    o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
    | o- magna007 ................................................................................................ [10.8.128.7 (UP)]
    | o- magna010 ............................................................................................... [10.8.128.10 (UP)]
    o- host-groups .................................................................................................... [Groups : 0]
    o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 1]
      o- iqn.1994-05.com.redhat:rh7-client .................................................. [LOGGED-IN, Auth: CHAP, Disks: 1(50G)]
        o- lun 0 ................................................................................ [rbd/disk_1(50G), Owner: magna007]
/iscsi-targets>
[root@magna108 ubuntu]# multipath -ll
.
.
.
3600140574bd1036b1d24cd6a13c11702 dm-6 LIO-ORG,TCMU device
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 7:0:0:0 sdf 8:80 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 6:0:0:0 sde 8:64 active ready running
[root@magna108 ubuntu]#
[root@magna108 ubuntu]# fdisk -l
.
.
.
Disk /dev/mapper/ceph--14b17026--899d--4499--84d7--470abe3546f5-osd--block--a5ab9580--7b19--49bc--8d36--d2dad7cf9872: 931.5 GiB, 1000203091968 bytes, 1953521664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes
Disk /dev/sdf: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes
Disk /dev/mapper/3600140574bd1036b1d24cd6a13c11702: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 524288 bytes
[root@magna108 ubuntu]#
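The `fdisk -l` figures for the exported disk are internally consistent: a quick arithmetic check (illustrative only) shows that the sector count times the logical sector size matches the reported byte count, which is exactly 50 GiB.

```python
# Cross-check the fdisk -l output for /dev/sde, /dev/sdf and the multipath device:
# 104857600 sectors of 512 bytes should equal the reported 53687091200 bytes,
# i.e. exactly 50 GiB.
sectors = 104857600
sector_size = 512                     # logical sector size from fdisk
size_bytes = sectors * sector_size
print(size_bytes)                     # 53687091200
print(size_bytes == 50 * 1024**3)     # True: exactly 50 GiB
```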
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294