Bug 1470349 - create: for an HA >1 target portals are not created as expected
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gluster-block
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Prasanna Kumar Kalever
QA Contact: Sweta Anandpara
Blocks: 1417151
Reported: 2017-07-12 15:18 EDT by Prasanna Kumar Kalever
Modified: 2017-09-21 00:20 EDT (History)
CC: 7 users
Fixed In Version: gluster-block-0.2.1-6.el7rhgs
Last Closed: 2017-09-21 00:20:54 EDT
Type: Bug


Attachments: None
Description Prasanna Kumar Kalever 2017-07-12 15:18:54 EDT
Description of problem:
On create with ha 3:

[root@localhost gluster-block]# gluster-block create sample/block ha 3 192.168.124.208,192.168.124.8,192.168.124.179 1GiB
IQN: iqn.2016-12.org.gluster-block:5727bed8-2079-4551-ad46-89dc12b98711
PORTAL(S):  192.168.124.179:3260
RESULT: SUCCESS

# targetcli ls
[...]
  | o- iqn.2016-12.org.gluster-block:5727bed8-2079-4551-ad46-89dc12b98711  [TPGs: 3]                                  
  |   o- tpg1 .................................. [disabled]
  |   | o- acls ................................. [ACLs: 0]
  |   | o- luns ................................. [LUNs: 0]
  |   | o- portals ........................... [Portals: 0]
  |   o- tpg2 .................................. [disabled]
  |   | o- acls ................................. [ACLs: 0]
  |   | o- luns ................................. [LUNs: 0]
  |   | o- portals ........................... [Portals: 0]
  |   o- tpg3 .................................. [disabled]
  |     o- acls ................................. [ACLs: 0]
  |     o- luns ................................. [LUNs: 1]
  |     | o- lun0 ............................ [user/block]
  |     o- portals ........................... [Portals: 1]
  |       o- 192.168.124.179:3260 .................... [OK]
  o- loopback ................................ [Targets: 0]
  o- vhost ................................... [Targets: 0]

Notice that only the portal for 192.168.124.179 was created successfully; the portals for the other two nodes are missing.
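The mismatch is easy to check mechanically: count the exported portal entries in the targetcli listing and compare against the requested ha. A minimal shell sketch over the output pasted above (matching on `:3260` is an assumption about the portal line format):

```shell
# Count portal lines (entries ending in the default iSCSI port :3260)
# in the targetcli output above; with ha 3 we would expect 3, but only
# one portal was actually created.
ha=3
count=$(grep -c ':3260' <<'EOF'
  |       o- 192.168.124.179:3260 .................... [OK]
EOF
)
echo "portals exported: $count of $ha"   # → portals exported: 1 of 3
```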

Version-Release number of selected component (if applicable):
gluster-block-0.2.1-5.el7rhgs

How reproducible:
Fairly consistently.

Steps to Reproduce:
1. Create a gluster block device with ha > 1.

Actual results:
A portal is exported only for the last node in the list of HA nodes.

Expected results:
Portals should be exported for all nodes in the HA list.
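The symptom (a portal only for the last node in the list) is the classic last-element-wins pattern, where a loop overwrites its result instead of accumulating it. The sketch below is purely illustrative of that failure class, in shell for brevity; it is not the actual gluster-block code, which is C, and the real fix is in the patch linked in comment 2:

```shell
IPS="192.168.124.208 192.168.124.8 192.168.124.179"

# Buggy pattern: '=' overwrites on every iteration, so only the
# last IP in the list survives -- matching the symptom above.
buggy=""
for ip in $IPS; do
  buggy="$ip:3260"
done

# Correct pattern: append each portal to the accumulated list.
fixed=""
for ip in $IPS; do
  fixed="$fixed $ip:3260"
done

echo "buggy:$buggy"   # → buggy:192.168.124.179:3260
echo "fixed:$fixed"   # → fixed: 192.168.124.208:3260 192.168.124.8:3260 192.168.124.179:3260
```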
Comment 2 Prasanna Kumar Kalever 2017-07-12 15:21:26 EDT
Patch:
https://review.gluster.org/#/c/17761/
Comment 7 Sweta Anandpara 2017-07-14 01:56:01 EDT
Tested and verified this on the build gluster-block-0.2.1-6 and glusterfs-3.8.4-33.

Block create with ha 1, 2, and 3 is working as expected. Logs are pasted below. Moving this bug to Verified in 3.3.

[root@dhcp47-115 ~]# gluster-block create nash/nb11 ha 3 auth enable 10.70.47.115,10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:f737cef7-5869-499e-a5b2-25e72f07ebe8
USERNAME: f737cef7-5869-499e-a5b2-25e72f07ebe8
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
PORTAL(S):  10.70.47.115:3260 10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS
[root@dhcp47-115 ~]# gluster-block list nash
nb1
nb2
nb3
nb4
nb5
nb6
nb7
nb8
nb9
nb10
nb11
[root@dhcp47-115 ~]# gluster-block info nash/nb11
NAME: nb11
VOLUME: nash
GBID: f737cef7-5869-499e-a5b2-25e72f07ebe8
SIZE: 1073741824
HA: 3
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117 10.70.47.115
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block create nash/nb12 ha 2 auth enable 10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:bf2f31cb-38ef-46ae-9a84-756c02f21e70
USERNAME: bf2f31cb-38ef-46ae-9a84-756c02f21e70
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
PORTAL(S):  10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS
[root@dhcp47-115 ~]# gluster-block info nash/nb12
NAME: nb12
VOLUME: nash
GBID: bf2f31cb-38ef-46ae-9a84-756c02f21e70
SIZE: 1073741824
HA: 2
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block create nash/nb13 ha 1 auth enable 10.70.47.115 1M
IQN: iqn.2016-12.org.gluster-block:dd44ab73-6802-4de4-b89b-9380947631da
USERNAME: dd44ab73-6802-4de4-b89b-9380947631da
PASSWORD: 3d299310-be27-4340-b431-0679d17fbfb0
PORTAL(S):  10.70.47.115:3260
RESULT: SUCCESS
[root@dhcp47-115 ~]# gluster-block info nash/nb13
NAME: nb13
VOLUME: nash
GBID: dd44ab73-6802-4de4-b89b-9380947631da
SIZE: 1048576
HA: 1
PASSWORD: 3d299310-be27-4340-b431-0679d17fbfb0
BLOCK CONFIG NODE(S): 10.70.47.115
[root@dhcp47-115 ~]# 
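The checks above reduce to a simple sanity test: the PORTAL(S) line of the create reply should list exactly ha portals. A minimal sketch, assuming the reply format shown in these logs:

```shell
# PORTAL(S) line copied from the ha 3 create reply above.
reply="PORTAL(S):  10.70.47.115:3260 10.70.47.116:3260 10.70.47.117:3260"
ha=3

# Each exported portal ends in the default iSCSI port :3260.
count=$(printf '%s\n' "$reply" | grep -o ':3260' | wc -l)

if [ "$count" -eq "$ha" ]; then
  echo "PASS: $count portals for ha $ha"
else
  echo "FAIL: $count portals for ha $ha"
fi
```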


Environment:
------------

[root@dhcp47-115 ~]# gluster peer status
Number of Peers: 5

Hostname: dhcp47-121.lab.eng.blr.redhat.com
Uuid: 49610061-1788-4cbc-9205-0e59fe91d842
State: Peer in Cluster (Connected)
Other names:
10.70.47.121

Hostname: dhcp47-113.lab.eng.blr.redhat.com
Uuid: a0557927-4e5e-4ff7-8dce-94873f867707
State: Peer in Cluster (Connected)

Hostname: dhcp47-114.lab.eng.blr.redhat.com
Uuid: c0dac197-5a4d-4db7-b709-dbf8b8eb0896
State: Peer in Cluster (Connected)
Other names:
10.70.47.114

Hostname: dhcp47-116.lab.eng.blr.redhat.com
Uuid: a96e0244-b5ce-4518-895c-8eb453c71ded
State: Peer in Cluster (Connected)
Other names:
10.70.47.116

Hostname: dhcp47-117.lab.eng.blr.redhat.com
Uuid: 17eb3cef-17e7-4249-954b-fc19ec608304
State: Peer in Cluster (Connected)
Other names:
10.70.47.117
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# rpm -qa | grep gluster
glusterfs-cli-3.8.4-33.el7rhgs.x86_64
glusterfs-rdma-3.8.4-33.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-10.el7.x86_64
python-gluster-3.8.4-33.el7rhgs.noarch
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-33.el7rhgs.x86_64
glusterfs-fuse-3.8.4-33.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-33.el7rhgs.x86_64
gluster-block-0.2.1-6.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
samba-vfs-glusterfs-4.6.3-3.el7rhgs.x86_64
glusterfs-3.8.4-33.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-26.el7rhgs.x86_64
glusterfs-api-3.8.4-33.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-33.el7rhgs.x86_64
glusterfs-libs-3.8.4-33.el7rhgs.x86_64
glusterfs-server-3.8.4-33.el7rhgs.x86_64
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster v list
ctdb
gluster_shared_storage
nash
testvol
[root@dhcp47-115 ~]# gluster v info nsah
Volume nsah does not exist
[root@dhcp47-115 ~]# gluster v info nash
 
Volume Name: nash
Type: Replicate
Volume ID: f1ea3d3e-c536-4f36-b61f-cb9761b8a0a6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.115:/bricks/brick4/nash0
Brick2: 10.70.47.116:/bricks/brick4/nash1
Brick3: 10.70.47.117:/bricks/brick4/nash2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.open-behind: off
performance.readdir-ahead: off
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-115 ~]#
Comment 9 errata-xmlrpc 2017-09-21 00:20:54 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773
