Bug 1450824 - "gluster-block vol delete" doesn't work properly
Summary: "gluster-block vol delete" doesn't work properly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tcmu-runner
Version: cns-3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Prasanna Kumar Kalever
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-05-15 08:55 UTC by Humble Chirammal
Modified: 2019-02-12 05:33 UTC
CC List: 7 users

Fixed In Version: tcmu-runner-1.2.0-3.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:17:54 UTC
Embargoed:


Attachments: None


Links:
Red Hat Product Errata RHEA-2017:2773 (SHIPPED_LIVE): new packages: gluster-block. Last updated: 2017-09-21 08:16:22 UTC

Description Humble Chirammal 2017-05-15 08:55:43 UTC
Description of problem:

[root@localhost glusterblock]# gluster-block  list demo
Volume demo does not exist
[root@localhost glusterblock]# gluster-block create blockmaster1/demo ha 1 10.67.116.82 2GiB --json
{ "RESULT": "FAIL", "errCode": 17, "errMsg": "BLOCK with name: 'demo' already EXIST\n" }
[root@localhost glusterblock]# 

Version-Release number of selected component (if applicable):

[root@localhost glusterblock]# gluster-block version
gluster-block (0.2)
Repository rev: https://github.com/gluster/gluster-block.git
Copyright (c) 2016 Red Hat, Inc. <https://redhat.com/>
gluster-block comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@localhost glusterblock]# 


How reproducible:


[root@localhost glusterblock]# gluster-block  delete blockmaster1/demo
SUCCESSFUL ON:   10.67.116.82
RESULT: SUCCESS
[root@localhost glusterblock]# gluster-block  list blockmaster1
*Nil*
[root@localhost glusterblock]# gluster-block create blockmaster1/demo ha 1 10.67.116.82 2GiB --json
{ "RESULT": "FAIL", "errCode": 17, "errMsg": "BLOCK with name: 'demo' already EXIST\n" }
[root@localhost glusterblock]# 

Steps to Reproduce:
1. Create a block: gluster-block create blockmaster1/demo ha 1 10.67.116.82 2GiB
2. Delete it: gluster-block delete blockmaster1/demo (reports RESULT: SUCCESS, and the block no longer shows up in "gluster-block list blockmaster1")
3. Create a block with the same name again.

Actual results:

The create in step 3 fails with errCode 17: "BLOCK with name: 'demo' already EXIST", even though the delete reported SUCCESS and the block is no longer listed.

Expected results:

After a successful delete, a block with the same name can be created again.

Additional info:
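
errCode 17 in the JSON output corresponds to errno EEXIST, which suggests the delete leaves behind state that makes gluster-block believe the block still exists. A minimal one-shot reproducer based on the transcripts above (the node IP 10.67.116.82 and the volume blockmaster1 are from this setup; adjust for your environment):

gluster-block create blockmaster1/demo ha 1 10.67.116.82 2GiB
gluster-block delete blockmaster1/demo     # reports RESULT: SUCCESS
gluster-block list blockmaster1            # block is gone: *Nil*
gluster-block create blockmaster1/demo ha 1 10.67.116.82 2GiB
# expected: SUCCESS; actual: errCode 17, "BLOCK with name: 'demo' already EXIST"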

Comment 2 Humble Chirammal 2017-05-15 09:04:12 UTC
Check "Steps to reproduce" section.  In "Description", I made a mistake by specifying " gluster-block  list demo" instead of " gluster-block  list blockmaster1"

Comment 6 Prasanna Kumar Kalever 2017-05-21 10:30:35 UTC
Related patches:
https://github.com/open-iscsi/tcmu-runner/pull/158

Comment 8 surabhi 2017-06-19 09:31:54 UTC
Tried creating and deleting blocks multiple times. After the 3rd or 4th delete attempt I hit the vmcore issue every time. This bug can't be verified until that issue is fixed.

The BZ is : https://bugzilla.redhat.com/show_bug.cgi?id=1449245

Comment 9 Sweta Anandpara 2017-07-14 05:59:31 UTC
Tested and verified this on the build gluster-block-0.2.1-6 and glusterfs-3.8.4-33.

Deleting an existing block succeeds, and creating a block with the same name soon afterwards also succeeds.

Moving this bug to verified in 3.3. Logs are pasted below.

[root@dhcp47-115 ~]# gluster-block list nash
nb1
nb2
nb3
nb4
nb5
nb6
nb7
nb8
nb9
nb10
nb11
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block info nash/nb11
NAME: nb11
VOLUME: nash
GBID: 9b695f27-a4b5-4b21-ace1-6a6e031b0ea5
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 10.70.47.117 10.70.47.116 10.70.47.115
[root@dhcp47-115 ~]# gluster-block delete
Inadequate arguments for delete:
gluster-block delete <volname/blockname> [--json*]
[root@dhcp47-115 ~]# gluster-block delete nash/nb11
SUCCESSFUL ON:   10.70.47.117 10.70.47.116 10.70.47.115
RESULT: SUCCESS
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block info nash/nb11
block nash/nb11 doesn't exist
[root@dhcp47-115 ~]# gluster-block list nash
nb1
nb2
nb3
nb4
nb5
nb6
nb7
nb8
nb9
nb10
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block create nash/nb11 ha 3 auth enable 10.70.47.115,10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:f737cef7-5869-499e-a5b2-25e72f07ebe8
USERNAME: f737cef7-5869-499e-a5b2-25e72f07ebe8
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
PORTAL(S):  10.70.47.115:3260 10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS
[root@dhcp47-115 ~]# gluster-block list nash
nb1
nb2
nb3
nb4
nb5
nb6
nb7
nb8
nb9
nb10
nb11
[root@dhcp47-115 ~]# gluster-block info nash/nb11
NAME: nb11
VOLUME: nash
GBID: f737cef7-5869-499e-a5b2-25e72f07ebe8
SIZE: 1073741824
HA: 3
PASSWORD: 67f2d04c-2b4e-449a-91ac-2a415bb827ab
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117 10.70.47.115
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster-block create nash/nb12 ha 2 auth enable 10.70.47.116,10.70.47.117 1G
IQN: iqn.2016-12.org.gluster-block:bf2f31cb-38ef-46ae-9a84-756c02f21e70
USERNAME: bf2f31cb-38ef-46ae-9a84-756c02f21e70
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
PORTAL(S):  10.70.47.116:3260 10.70.47.117:3260
RESULT: SUCCESS
[root@dhcp47-115 ~]# gluster-block info nash/nb12
NAME: nb12
VOLUME: nash
GBID: bf2f31cb-38ef-46ae-9a84-756c02f21e70
SIZE: 1073741824
HA: 2
PASSWORD: 0bb05662-1884-43b5-b09a-0da31dcb68eb
BLOCK CONFIG NODE(S): 10.70.47.116 10.70.47.117
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster-block create nash/nb11 ha 1 auth enable 10.70.47.116 1M
BLOCK with name: 'nb11' already EXIST

RESULT:FAIL
[root@dhcp47-115 ~]# gluster-block create nash/nb12 ha 1 auth enable 10.70.47.115 1M
BLOCK with name: 'nb12' already EXIST

RESULT:FAIL
[root@dhcp47-115 ~]# 
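
For future regression runs, the delete-then-recreate check above can be scripted. A minimal sketch, assuming the same nash volume, HA 3 setup, and portal IPs used in this verification; it relies only on the "gluster-block list" output format shown in the logs above, not on the subcommands' exit codes:

#!/bin/bash
# Hypothetical regression check: delete a block, confirm it is no
# longer listed, then recreate it under the same name.
VOL=nash
BLK=nb11
HOSTS=10.70.47.115,10.70.47.116,10.70.47.117

gluster-block delete ${VOL}/${BLK}
if gluster-block list ${VOL} | grep -qx "${BLK}"; then
    echo "FAIL: ${BLK} still listed after delete"
    exit 1
fi
gluster-block create ${VOL}/${BLK} ha 3 auth enable ${HOSTS} 1G
if ! gluster-block list ${VOL} | grep -qx "${BLK}"; then
    echo "FAIL: ${BLK} missing after recreate"
    exit 1
fi
echo "PASS: delete and recreate of ${VOL}/${BLK} succeeded"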

Environment:
------------

[root@dhcp47-115 ~]# gluster peer status
Number of Peers: 5

Hostname: dhcp47-121.lab.eng.blr.redhat.com
Uuid: 49610061-1788-4cbc-9205-0e59fe91d842
State: Peer in Cluster (Connected)
Other names:
10.70.47.121

Hostname: dhcp47-113.lab.eng.blr.redhat.com
Uuid: a0557927-4e5e-4ff7-8dce-94873f867707
State: Peer in Cluster (Connected)

Hostname: dhcp47-114.lab.eng.blr.redhat.com
Uuid: c0dac197-5a4d-4db7-b709-dbf8b8eb0896
State: Peer in Cluster (Connected)
Other names:
10.70.47.114

Hostname: dhcp47-116.lab.eng.blr.redhat.com
Uuid: a96e0244-b5ce-4518-895c-8eb453c71ded
State: Peer in Cluster (Connected)
Other names:
10.70.47.116

Hostname: dhcp47-117.lab.eng.blr.redhat.com
Uuid: 17eb3cef-17e7-4249-954b-fc19ec608304
State: Peer in Cluster (Connected)
Other names:
10.70.47.117
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# rpm -qa | grep gluster
glusterfs-cli-3.8.4-33.el7rhgs.x86_64
glusterfs-rdma-3.8.4-33.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.2.0-10.el7.x86_64
python-gluster-3.8.4-33.el7rhgs.noarch
vdsm-gluster-4.17.33-1.1.el7rhgs.noarch
glusterfs-client-xlators-3.8.4-33.el7rhgs.x86_64
glusterfs-fuse-3.8.4-33.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-events-3.8.4-33.el7rhgs.x86_64
gluster-block-0.2.1-6.el7rhgs.x86_64
gluster-nagios-addons-0.2.9-1.el7rhgs.x86_64
samba-vfs-glusterfs-4.6.3-3.el7rhgs.x86_64
glusterfs-3.8.4-33.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-26.el7rhgs.x86_64
glusterfs-api-3.8.4-33.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-33.el7rhgs.x86_64
glusterfs-libs-3.8.4-33.el7rhgs.x86_64
glusterfs-server-3.8.4-33.el7rhgs.x86_64
[root@dhcp47-115 ~]# 
[root@dhcp47-115 ~]# gluster v list
ctdb
gluster_shared_storage
nash
testvol
[root@dhcp47-115 ~]# gluster v info nsah
Volume nsah does not exist
[root@dhcp47-115 ~]# gluster v info nash
 
Volume Name: nash
Type: Replicate
Volume ID: f1ea3d3e-c536-4f36-b61f-cb9761b8a0a6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.115:/bricks/brick4/nash0
Brick2: 10.70.47.116:/bricks/brick4/nash1
Brick3: 10.70.47.117:/bricks/brick4/nash2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.open-behind: off
performance.readdir-ahead: off
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-115 ~]#

Comment 12 errata-xmlrpc 2017-09-21 04:17:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773

