Description of problem:
With CNS, initiators are unable to access the target block devices, even though the initiator login appears to succeed. As a result, app pods cannot access block volumes.

The following steps were performed manually to check whether the initiators can access the target block devices:

1) Created a block volume via the provisioner by submitting a claim request.

2) From one of the worker nodes, configured the initiator and multipath, and logged in to the target. Everything looks correct up to this point.

[root@dhcp46-248 ~]# iscsiadm -m discovery -t st -p 10.70.47.49
10.70.46.248:3260,1 iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd
10.70.47.49:3260,2 iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd
10.70.47.72:3260,3 iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd

[root@dhcp46-248 ~]# iscsiadm -m node -T "iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd" -o update -n node.session.auth.authmethod -v CHAP -n node.session.auth.username -v d01fc50c-2293-452e-913f-46e3c829d8dd -n node.session.auth.password -v 9a3eef37-9cd9-49af-83bc-86efc3a0eda9

[root@dhcp46-248 ~]# iscsiadm -m node -T "iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd" -l
Logging in to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.46.248,3260] (multiple)
Logging in to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.47.49,3260] (multiple)
Logging in to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.47.72,3260] (multiple)
Login to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.46.248,3260] successful.
Login to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.47.49,3260] successful.
Login to [iface: default, target: iqn.2016-12.org.gluster-block:d01fc50c-2293-452e-913f-46e3c829d8dd, portal: 10.70.47.72,3260] successful.

3) lsblk, however, fails to list the block device.

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
  ├─rhel_dhcp47--104-root 253:0 0 50G 0 lvm /
  ├─rhel_dhcp47--104-swap 253:1 0 7.9G 0 lvm
  └─rhel_dhcp47--104-home 253:5 0 41.1G 0 lvm /home
sdb 8:16 0 60G 0 disk
└─vg_rhel_dhcp47----104--var-lv_var 253:4 0 59G 0 lvm /var
sdc 8:32 0 50G 0 disk
└─sdc1 8:33 0 50G 0 part
  ├─docker--vg-docker--pool_tmeta 253:2 0 52M 0 lvm
  │ └─docker--vg-docker--pool 253:10 0 19.9G 0 lvm
  │   ├─docker-253:4-100663485-d6437f7bf09e2e4730868451a43080c3fcc6700bafefc7633fa373e494858e4f 253:6 0 10G 0 dm
  │   ├─docker-253:4-100663485-d2a5c31d4f893f81428c3614b85eaf766c2b8898df2dd03ef43c2e22afc37877 253:7 0 10G 0 dm
  │   ├─docker-253:4-100663485-5f608d025f44975ba840f5c0fcbd0ccbd058d9eee1e10af37eb026aa98b9c633 253:13 0 10G 0 dm
  │   ├─docker-253:4-100663485-a1dc0c648517f2eca0212cd20de7d40c7ff3f6145810a3d28b9d32a2296b7680 253:15 0 10G 0 dm
  │   ├─docker-253:4-100663485-997dba6c715c7c8059c94d216c66555a421d18ce988ca187b75831bec711f80e 253:16 0 10G 0 dm
  │   └─docker-253:4-100663485-535673e098947296b3f30a1ccc3cadd7cf86b2aeb6a129a1944cb52702db49f6 253:17 0 10G 0 dm
  └─docker--vg-docker--pool_tdata 253:3 0 19.9G 0 lvm
    └─docker--vg-docker--pool 253:10 0 19.9G 0 lvm
      ├─docker-253:4-100663485-d6437f7bf09e2e4730868451a43080c3fcc6700bafefc7633fa373e494858e4f 253:6 0 10G 0 dm
      ├─docker-253:4-100663485-d2a5c31d4f893f81428c3614b85eaf766c2b8898df2dd03ef43c2e22afc37877 253:7 0 10G 0 dm
      ├─docker-253:4-100663485-5f608d025f44975ba840f5c0fcbd0ccbd058d9eee1e10af37eb026aa98b9c633 253:13 0 10G 0 dm
      ├─docker-253:4-100663485-a1dc0c648517f2eca0212cd20de7d40c7ff3f6145810a3d28b9d32a2296b7680 253:15 0 10G 0 dm
      ├─docker-253:4-100663485-997dba6c715c7c8059c94d216c66555a421d18ce988ca187b75831bec711f80e 253:16 0 10G 0 dm
      └─docker-253:4-100663485-535673e098947296b3f30a1ccc3cadd7cf86b2aeb6a129a1944cb52702db49f6 253:17 0 10G 0 dm
sdd 8:48 0 100G 0 disk
sde 8:64 0 100G 0 disk
sdf 8:80 0 100G 0 disk
sdg 8:96 0 512G 0 disk
├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5_tmeta 253:8 0 12M 0 lvm
│ └─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5-tpool 253:11 0 2G 0 lvm
│   ├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5 253:12 0 2G 0 lvm
│   └─vg_09d5f75d0b02b27a835469f4f6c631b5-brick_9e00cbdbaed884b3996bc74862b6a7c5 253:14 0 2G 0 lvm
├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5_tdata 253:9 0 2G 0 lvm
│ └─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5-tpool 253:11 0 2G 0 lvm
│   ├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_9e00cbdbaed884b3996bc74862b6a7c5 253:12 0 2G 0 lvm
│   └─vg_09d5f75d0b02b27a835469f4f6c631b5-brick_9e00cbdbaed884b3996bc74862b6a7c5 253:14 0 2G 0 lvm
├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a_tmeta 253:18 0 2.5G 0 lvm
│ └─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a-tpool 253:20 0 500G 0 lvm
│   ├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a 253:21 0 500G 0 lvm
│   └─vg_09d5f75d0b02b27a835469f4f6c631b5-brick_dd837542465861b00ee4411d164fe36a 253:22 0 500G 0 lvm
└─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a_tdata 253:19 0 500G 0 lvm
  └─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a-tpool 253:20 0 500G 0 lvm
    ├─vg_09d5f75d0b02b27a835469f4f6c631b5-tp_dd837542465861b00ee4411d164fe36a 253:21 0 500G 0 lvm
    └─vg_09d5f75d0b02b27a835469f4f6c631b5-brick_dd837542465861b00ee4411d164fe36a 253:22 0 500G 0 lvm
sr0 11:0 1 1024M 0 rom

Version-Release number of selected component (if applicable):

Images used:
rhgs3/rhgs-volmanager-rhel7:3.3.0-6
rhgs3/rhgs-server-rhel7:3.3.0-6

heketi build:
heketi-client-5.0.0-4.el7rhgs.x86_64
cns-deploy-5.0.0-6.el7rhgs.x86_64

gluster build:
rpm -qa | grep 'gluster'
glusterfs-server-3.8.4-32.el7rhgs.x86_64
gluster-block-0.2.1-4.el7rhgs.x86_64
glusterfs-libs-3.8.4-32.el7rhgs.x86_64
glusterfs-3.8.4-32.el7rhgs.x86_64
glusterfs-api-3.8.4-32.el7rhgs.x86_64
glusterfs-cli-3.8.4-32.el7rhgs.x86_64
glusterfs-fuse-3.8.4-32.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-32.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-32.el7rhgs.x86_64

How reproducible:
Always

Logs shall be attached shortly.
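When a login succeeds but no new device shows up in lsblk (as in step 3 above), a quick check on the initiator node is whether the kernel attached any disk to the active iSCSI sessions. A minimal sketch of such a check, assuming the open-iscsi tools are installed (the commands themselves are standard; the fallback message is just for illustration):

```shell
# Sketch: list active iSCSI sessions, rescan them for LUNs, and look for new disks.
# Falls back to a message when open-iscsi is not installed on this host.
if command -v iscsiadm >/dev/null 2>&1; then
    iscsiadm -m session            # one line per logged-in session
    iscsiadm -m session --rescan   # ask the kernel to rescan each session for LUNs
    lsblk -S                       # SCSI devices only; a new sdX should appear here
else
    echo "iscsiadm not available; run this on the initiator node"
fi
```

In the failure reported here, even a rescan surfaces no disk, because the target exposes no usable LUN despite accepting the login.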
(In reply to Prasanna Kumar Kalever from comment #11)
> The problem seem to be with the block device size being very minimal (less
> than 1 sector/512 bytes)
>
> Currently the default unit for gluster-block is bytes, so if you can mention
> the units there like MB or GB, IMO you should not hit this.

Confirmed this behavior: a block device of size 512 bytes gets detected, while one of 511 bytes does not.
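The 511-vs-512-byte boundary is consistent with LUN sizing in whole 512-byte sectors: a requested size below one sector rounds down to zero sectors, leaving nothing to export. A small illustrative sketch of that rounding (plain shell arithmetic, not the gluster-block code itself):

```shell
# Sketch: requested byte sizes rounded down to whole 512-byte sectors.
# 511 bytes yields 0 sectors, i.e. no usable block device behind the target.
for size in 511 512 1024; do
    echo "$size bytes -> $((size / 512)) sector(s)"
done
```

Specifying the size with an explicit unit such as MB or GB, as suggested above, keeps the request well clear of this boundary.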
Verified in cns-deploy-5.0.0-15.el7rhgs.x86_64. The provisioned volume is of the correct size and I/Os are running.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:2879