Bug 1530281 - glustershd fails to start on a volume force start after a brick is down
Summary: glustershd fails to start on a volume force start after a brick is down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Atin Mukherjee
QA Contact:
URL:
Whiteboard: brick-multiplexing
Depends On: 1530217 1530325
Blocks: 1530448 1530449 1530450
 
Reported: 2018-01-02 12:44 UTC by Atin Mukherjee
Modified: 2018-03-15 11:24 UTC
CC: 7 users

Fixed In Version: glusterfs-4.0.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1530217
Clones: 1530448
Environment:
Last Closed: 2018-03-15 11:24:53 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Atin Mukherjee 2018-01-02 14:58:21 UTC
Description of problem:
======================
glustershd fails to start on one of the nodes when a volume is force-started to bring a brick back online.

Version-Release number of selected component (if applicable):
===========
mainline

How reproducible:
=================
3/5

Steps to Reproduce:
1. Create a brick-multiplexing setup.
2. Create about 30 1x3 (replica 3) volumes.
3. Start the volumes.
4. Pump I/O to the base volume and to one more volume (I created an extra EC volume for this).
5. Now kill a brick, say b1.
6. Force-start any volume other than the base volume, preferably one higher up in the list, e.g. vol15 or vol20 (a shell sketch of these steps follows).
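
A minimal shell sketch of the reproducer above, assuming three peers host1/host2/host3 with brick paths under /bricks (all host names, paths, volume names and the brick PID are placeholders, not taken from the original setup):

# Enable brick multiplexing cluster-wide.
gluster volume set all cluster.brick-multiplex on

# Create and start ~30 1x3 (replica 3) volumes.
for i in $(seq 1 30); do
    gluster volume create vol$i replica 3 \
        host1:/bricks/vol$i host2:/bricks/vol$i host3:/bricks/vol$i force
    gluster volume start vol$i
done

# Run client I/O against vol1 and one more volume here.

# Abruptly kill one brick; with multiplexing this is the shared brick process.
gluster volume status vol1        # note the PID of the brick on host1
kill -9 "$BRICK_PID"              # placeholder: the PID noted above, run on host1

# Force-start a volume further up the list (not the base volume).
gluster volume start vol20 force

# Check whether the self-heal daemon came up on every node.
gluster volume status vol20 | grep -i 'self-heal'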



Actual results:
=========
glustershd fails to start on one of the volumes.

Comment 2 Worker Ant 2018-01-02 14:59:25 UTC
REVIEW: https://review.gluster.org/19119 (glusterd: Nullify pmap entry for bricks belonging to same port) posted (#1) for review on master by Atin Mukherjee

Comment 3 Worker Ant 2018-01-03 01:23:23 UTC
COMMIT: https://review.gluster.org/19119 committed in master by "Atin Mukherjee" <amukherj> with a commit message: glusterd: Nullify pmap entry for bricks belonging to same port

Commit 30e0b86 tried to address all the stale port issues glusterd had
when a brick is abruptly killed. In the brick-multiplexing case, because
of a bug, the portmap entry was not getting removed. This patch
addresses that.

Change-Id: Ib020b967a9b92f1abae9cab9492f0cacec59aaa1
BUG: 1530281
Signed-off-by: Atin Mukherjee <amukherj>
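
For anyone verifying the fix, a rough CLI check (the volume names, brick PID, and glusterd log path are placeholders/defaults, adjust to the setup):

# After killing a multiplexed brick, force-start another volume as in comment 1.
gluster volume start vol15 force

# With the fix, the self-heal daemon should be reported online on all nodes.
gluster volume status vol15 | grep -i 'self-heal'
ps aux | grep glustershd

# Optionally inspect glusterd's log for portmap activity around the restart.
grep -i pmap /var/log/glusterfs/glusterd.log | tail -n 20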

Comment 4 Shyamsundar 2018-03-15 11:24:53 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/

