Bug 1454335 - gluster-block info doesn't show status of configured nodes when the node is down at the time of delete
Summary: gluster-block info doesn't show status of configured nodes when the node is d...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Prasanna Kumar Kalever
QA Contact: surabhi
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-05-22 13:37 UTC by Pranith Kumar K
Modified: 2017-09-21 04:19 UTC
CC List: 1 user

Fixed In Version: gluster-block-0.2.1-1.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:19:33 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2017:2773 (normal, SHIPPED_LIVE): new packages: gluster-block, last updated 2017-09-21 08:16:22 UTC

Description Pranith Kumar K 2017-05-22 13:37:00 UTC
Description of problem:

1) Create 5 block devices when all 3 nodes are up.
[root@localhost ~]# for i in {1..5}; do gluster-block create r3/$i ha 3 192.168.122.61,192.168.122.123,192.168.122.113 1GiB ; done
IQN: iqn.2016-12.org.gluster-block:7e59651d-7f6d-43a5-af30-d63e520a00da
PORTAL(S):  192.168.122.61:3260 192.168.122.123:3260 192.168.122.113:3260
RESULT: SUCCESS
IQN: iqn.2016-12.org.gluster-block:5d09001c-1961-49d1-8a39-e2c566779549
PORTAL(S):  192.168.122.61:3260 192.168.122.123:3260 192.168.122.113:3260
RESULT: SUCCESS
IQN: iqn.2016-12.org.gluster-block:35451e36-3485-40fd-94bc-5ec08ffee52c
PORTAL(S):  192.168.122.61:3260 192.168.122.123:3260 192.168.122.113:3260
RESULT: SUCCESS
IQN: iqn.2016-12.org.gluster-block:c05f3109-24d5-4d3e-a95a-015540db3632
PORTAL(S):  192.168.122.61:3260 192.168.122.123:3260 192.168.122.113:3260
RESULT: SUCCESS
IQN: iqn.2016-12.org.gluster-block:d8a385bd-c255-4bdb-968c-903f1cd325f3
PORTAL(S):  192.168.122.61:3260 192.168.122.123:3260 192.168.122.113:3260
RESULT: SUCCESS
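
A quick sanity check at this point, sketched here and not part of the original run: gluster-block also provides a list subcommand, so all five blocks on the volume can be confirmed in one shot before inspecting each with info.

# The block names 1..5 should be listed for the r3 volume
[root@localhost ~]# gluster-block list r3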

2) Check that they are created successfully.

[root@localhost ~]# for i in {1..5}; do gluster-block info r3/$i; done
NAME: 1
VOLUME: r3
GBID: 7e59651d-7f6d-43a5-af30-d63e520a00da
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 192.168.122.61 192.168.122.123 192.168.122.113
NAME: 2
VOLUME: r3
GBID: 5d09001c-1961-49d1-8a39-e2c566779549
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 192.168.122.61 192.168.122.123 192.168.122.113
NAME: 3
VOLUME: r3
GBID: 35451e36-3485-40fd-94bc-5ec08ffee52c
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 192.168.122.61 192.168.122.123 192.168.122.113
NAME: 4
VOLUME: r3
GBID: c05f3109-24d5-4d3e-a95a-015540db3632
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 192.168.122.61 192.168.122.123 192.168.122.113
NAME: 5
VOLUME: r3
GBID: d8a385bd-c255-4bdb-968c-903f1cd325f3
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S): 192.168.122.61 192.168.122.123 192.168.122.113

3) Kill gluster-blockd on 192.168.122.61
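
For clarity, a sketch of how the daemon can be taken down on that node; the step above just says "kill", so either stopping the systemd unit or killing the process works (assuming the package's standard gluster-blockd service):

# Run on 192.168.122.61
systemctl stop gluster-blockd    # or: pkill -f gluster-blockd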

4) Delete all the blocks:
[root@localhost ~]# for i in {1..5}; do gluster-block delete r3/$i; done
FAILED ON:   192.168.122.61
SUCCESSFUL ON:   192.168.122.123 192.168.122.113
RESULT: FAIL
FAILED ON:   192.168.122.61
SUCCESSFUL ON:   192.168.122.123 192.168.122.113
RESULT: FAIL
FAILED ON:   192.168.122.61
SUCCESSFUL ON:   192.168.122.123 192.168.122.113
RESULT: FAIL
FAILED ON:   192.168.122.61
SUCCESSFUL ON:   192.168.122.123 192.168.122.113
RESULT: FAIL
FAILED ON:   192.168.122.61
SUCCESSFUL ON:   192.168.122.123 192.168.122.113
RESULT: FAIL

5) gluster-block info no longer lists any node under BLOCK CONFIG NODE(S), even though 192.168.122.61 still holds the block configuration (a check is sketched after the output below):

[root@localhost ~]# for i in {1..5}; do gluster-block info r3/$i; done
NAME: 1
VOLUME: r3
GBID: 7e59651d-7f6d-43a5-af30-d63e520a00da
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S):
NAME: 2
VOLUME: r3
GBID: 5d09001c-1961-49d1-8a39-e2c566779549
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S):
NAME: 3
VOLUME: r3
GBID: 35451e36-3485-40fd-94bc-5ec08ffee52c
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S):
NAME: 4
VOLUME: r3
GBID: c05f3109-24d5-4d3e-a95a-015540db3632
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S):
NAME: 5
VOLUME: r3
GBID: d8a385bd-c255-4bdb-968c-903f1cd325f3
SIZE: 1073741824
HA: 3
PASSWORD: 
BLOCK CONFIG NODE(S):
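
To confirm that 192.168.122.61 really does still hold the configuration even though info no longer reports it, the iSCSI target configuration can be inspected on that node directly. A minimal sketch, assuming gluster-block's usual LIO/tcmu-runner backend and that targetcli is available there:

# Run on 192.168.122.61: the user:glfs backstores and the
# iqn.2016-12.org.gluster-block targets created in step 1 should still be listed
targetcli ls /backstores/user:glfs
targetcli ls /iscsi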

6) If we run the delete again, the leftover configuration on 192.168.122.61 does get deleted:
[root@localhost ~]# for i in {1..5}; do gluster-block delete r3/$i; done
SUCCESSFUL ON:   192.168.122.61
RESULT: SUCCESS
SUCCESSFUL ON:   192.168.122.61
RESULT: SUCCESS
SUCCESSFUL ON:   192.168.122.61
RESULT: SUCCESS
SUCCESSFUL ON:   192.168.122.61
RESULT: SUCCESS
SUCCESSFUL ON:   192.168.122.61
RESULT: SUCCESS
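
After this second delete pass the blocks should be fully gone; a quick confirmation (a sketch using the list subcommand) is:

# No block names should be returned for the r3 volume any more
[root@localhost ~]# gluster-block list r3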


Version-Release number of selected component (if applicable):


How reproducible:
Reproducible with the steps below (full run shown in the description above).

Steps to Reproduce:
1. Create block devices with ha 3 while all three nodes are up.
2. Kill gluster-blockd on one of the nodes.
3. Delete the blocks; the delete fails on the downed node and succeeds on the others.
4. Run gluster-block info on the blocks.

Actual results:
BLOCK CONFIG NODE(S) comes back empty, so there is no indication that the downed node still holds the block configuration.

Expected results:
BLOCK CONFIG NODE(S) should keep listing the node(s) on which the delete failed, so the leftover configuration can be cleaned up later.

Additional info:

Comment 6 surabhi 2017-06-28 14:29:33 UTC
Verified: gluster-block info now shows the config node on which the delete failed because gluster-blockd was not running on it.

gluster-block info blockstor/passed
NAME: passed
VOLUME: blockstor
GBID: f29901c1-fafb-40a7-a711-6d83cb782d4d
SIZE: 1073741824
HA: 2
PASSWORD: a768be80-23d6-42f3-a3f2-a33aa73c0383
BLOCK CONFIG NODE(S): 10.70.46.151 10.70.46.152
[root@dhcp46-151 yum.repos.d]# gluster-block delete blockstor/passed
FAILED ON:   10.70.46.152
SUCCESSFUL ON:   10.70.46.151
RESULT: FAIL
[root@dhcp46-151 yum.repos.d]# gluster-block info blockstor/passed
NAME: passed
VOLUME: blockstor
GBID: f29901c1-fafb-40a7-a711-6d83cb782d4d
SIZE: 1073741824
HA: 2
PASSWORD: a768be80-23d6-42f3-a3f2-a33aa73c0383
BLOCK CONFIG NODE(S): 10.70.46.152
[root@dhcp46-151 yum.repos.d]# gluster-block delete blockstor/passed
FAILED ON:   10.70.46.152
SUCCESSFUL ON: None
RESULT: FAIL
[root@dhcp46-151 yum.repos.d]# gluster-block info blockstor/passed
NAME: passed
VOLUME: blockstor
GBID: f29901c1-fafb-40a7-a711-6d83cb782d4d
SIZE: 1073741824
HA: 2
PASSWORD: a768be80-23d6-42f3-a3f2-a33aa73c0383
BLOCK CONFIG NODE(S): 10.70.46.152
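
With the fix, the leftover entry points at 10.70.46.152, so the expected follow-up once that node is reachable again would be to bring gluster-blockd back and retry the delete to finish the cleanup. A sketch of that flow (assumed, not part of the verification log); the note below mentions a separate issue seen on this path:

# On 10.70.46.152
systemctl start gluster-blockd
# Then, from any node, retry the delete
gluster-block delete blockstor/passed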

Moving this bug to verified. I saw another issue with block delete when the gluster-blockd service is brought back up on the node where it was down; I will be raising another BZ for that.

Comment 8 errata-xmlrpc 2017-09-21 04:19:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773

