Description of problem:

On create with HA 3:

[root@localhost gluster-block]# gluster-block create sample/block ha 3 192.168.124.208,192.168.124.8,192.168.124.179 1GiB
IQN: iqn.2016-12.org.gluster-block:5727bed8-2079-4551-ad46-89dc12b98711
PORTAL(S): 192.168.124.179:3260
RESULT: SUCCESS

# targetcli ls
[...]
  | o- iqn.2016-12.org.gluster-block:5727bed8-2079-4551-ad46-89dc12b98711 ... [TPGs: 3]
  |   o- tpg1 .................................. [disabled]
  |   | o- acls ................................. [ACLs: 0]
  |   | o- luns ................................. [LUNs: 0]
  |   | o- portals ........................... [Portals: 0]
  |   o- tpg2 .................................. [disabled]
  |   | o- acls ................................. [ACLs: 0]
  |   | o- luns ................................. [LUNs: 0]
  |   | o- portals ........................... [Portals: 0]
  |   o- tpg3 .................................. [disabled]
  |     o- acls ................................. [ACLs: 0]
  |     o- luns ................................. [LUNs: 1]
  |     | o- lun0 ............................ [user/block]
  |     o- portals ........................... [Portals: 1]
  |       o- 192.168.124.179:3260 .................... [OK]
  o- loopback ................................ [Targets: 0]
  o- vhost ................................... [Targets: 0]

Notice that only the portal for 192.168.124.179 got created successfully; the portals for the other two nodes were not created.

Version-Release number of selected component (if applicable):
gluster-block-0.2.1-5.el7rhgs

How reproducible:
Fairly.

Steps to Reproduce:
1. Create a gluster block device with HA > 1.

Actual results:
A portal is exported only for the last node in the list of HA nodes.

Expected results:
Portals are exported for all of the HA nodes.
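For reference, the missing portals can be confirmed by inspecting the target configuration on every node passed to the create command, not only on the node the CLI was run from. A minimal shell sketch, assuming passwordless ssh from the management host to the HA nodes (the loop itself is illustrative and not part of the original report):

# Illustrative check: each node listed in "gluster-block create ... ha 3 ..."
# should expose an iSCSI portal for its own IP; with this bug only the last
# node in the list does.
for node in 192.168.124.208 192.168.124.8 192.168.124.179; do
    echo "=== ${node} ==="
    ssh root@"${node}" "targetcli ls /iscsi"
done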
Patch: https://review.gluster.org/#/c/17761/
Tested and verified this on the builds gluster-block-0.2.1-6 and glusterfs-3.8.4-33. Block create with ha 1, 2, and 3 is working as expected. Logs are pasted below, with the test environment details at the end; an illustrative initiator-side login sketch follows them. Moving this bug to verified in 3.3.

[root@dhcp47-115 ~]# gluster-block create nash/nb11 ha 3 auth enable 10.70.47.115,10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:f737cef7-5869-499e-a5b2-25e72f07ebe8
USERNAME: f737cef7-5869-499e-a5b2-25e72f07ebe8
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
PORTAL(S): 10.70.47.115:3260 10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS

[root@dhcp47-115 ~]# gluster-block list nash
nb1
nb2
nb3
nb4
nb5
nb6
nb7
nb8
nb9
nb10
nb11

[root@dhcp47-115 ~]# gluster-block info nash/nb11
NAME: nb11
VOLUME: nash
GBID: f737cef7-5869-499e-a5b2-25e72f07ebe8
SIZE: 1073741824
HA: 3
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117 10.70.47.115
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster-block create nash/nb12 ha 2 auth enable 10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:bf2f31cb-38ef-46ae-9a84-756c02f21e70
USERNAME: bf2f31cb-38ef-46ae-9a84-756c02f21e70
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
PORTAL(S): 10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS

[root@dhcp47-115 ~]# gluster-block info nash/nb12
NAME: nb12
VOLUME: nash
GBID: bf2f31cb-38ef-46ae-9a84-756c02f21e70
SIZE: 1073741824
HA: 2
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster-block create nash/nb13 ha 1 auth enable 10.70.47.115 1M
IQN: iqn.2016-12.org.gluster-block:dd44ab73-6802-4de4-b89b-9380947631da
USERNAME: dd44ab73-6802-4de4-b89b-9380947631da
PASSWORD: 3d299310-be27-4340-b431-0679d17fbfb0
PORTAL(S): 10.70.47.115:3260
RESULT: SUCCESS

[root@dhcp47-115 ~]# gluster-block info nash/nb13
NAME: nb13
VOLUME: nash
GBID: dd44ab73-6802-4de4-b89b-9380947631da
SIZE: 1048576
HA: 1
PASSWORD: 3d299310-be27-4340-b431-0679d17fbfb0
BLOCK CONFIG NODE(S): 10.70.47.115
[root@dhcp47-115 ~]#

Environment:
------------
[root@dhcp47-115 ~]# gluster peer status
Number of Peers: 5

Hostname: dhcp47-121.lab.eng.blr.redhat.com
Uuid: 49610061-1788-4cbc-9205-0e59fe91d842
State: Peer in Cluster (Connected)
Other names:
10.70.47.121

Hostname: dhcp47-113.lab.eng.blr.redhat.com
Uuid: a0557927-4e5e-4ff7-8dce-94873f867707
State: Peer in Cluster (Connected)

Hostname: dhcp47-114.lab.eng.blr.redhat.com
Uuid: c0dac197-5a4d-4db7-b709-dbf8b8eb0896
State: Peer in Cluster (Connected)
Other names:
10.70.47.114

Hostname: dhcp47-116.lab.eng.blr.redhat.com
Uuid: a96e0244-b5ce-4518-895c-8eb453c71ded
State: Peer in Cluster (Connected)
Other names:
10.70.47.116

Hostname: dhcp47-117.lab.eng.blr.redhat.com
Uuid: 17eb3cef-17e7-4249-954b-fc19ec608304
State: Peer in Cluster (Connected)
Other names:
10.70.47.117
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# rpm -qa | grep gluster
glusterfs-cli-3.8.4-33.el7rhgs.x86_64
glusterfs-rdma-3.8.4-33.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-10.el7.x86_64
python-gluster-3.8.4-33.el7rhgs.noarch
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-33.el7rhgs.x86_64
glusterfs-fuse-3.8.4-33.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-33.el7rhgs.x86_64
gluster-block-0.2.1-6.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
samba-vfs-glusterfs-4.6.3-3.el7rhgs.x86_64
glusterfs-3.8.4-33.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-26.el7rhgs.x86_64
glusterfs-api-3.8.4-33.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-33.el7rhgs.x86_64
glusterfs-libs-3.8.4-33.el7rhgs.x86_64
glusterfs-server-3.8.4-33.el7rhgs.x86_64
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster v list
ctdb
gluster_shared_storage
nash
testvol
[root@dhcp47-115 ~]# gluster v info nsah
Volume nsah does not exist
[root@dhcp47-115 ~]# gluster v info nash

Volume Name: nash
Type: Replicate
Volume ID: f1ea3d3e-c536-4f36-b61f-cb9761b8a0a6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.115:/bricks/brick4/nash0
Brick2: 10.70.47.116:/bricks/brick4/nash1
Brick3: 10.70.47.117:/bricks/brick4/nash2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.open-behind: off
performance.readdir-ahead: off
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-115 ~]#
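As an illustrative follow-up (not part of the verification run above), this is how an initiator host could consume the nash/nb11 block that was created with auth enabled: discover the target through any one of the reported portals, set the CHAP credentials printed by the create command, and log in. A minimal sketch, assuming iscsi-initiator-utils on a client node; the IQN, username, and password are the nb11 values from the logs above:

# Discover the target via one of the portals reported by "gluster-block create"
iscsiadm -m discovery -t sendtargets -p 10.70.47.115:3260

# Set CHAP auth using the USERNAME/PASSWORD printed for nash/nb11
TGT=iqn.2016-12.org.gluster-block:f737cef7-5869-499e-a5b2-25e72f07ebe8
iscsiadm -m node -T "$TGT" --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TGT" --op update -n node.session.auth.username -v f737cef7-5869-499e-a5b2-25e72f07ebe8
iscsiadm -m node -T "$TGT" --op update -n node.session.auth.password -v 67f2d04c-2b4e-449a-91ac-2a415bb827ab

# Log in; with ha 3, sessions should be established to the discovered portals
iscsiadm -m node -T "$TGT" -l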
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773