Description of problem:
=======================
glustershd fails to start on one of the nodes when we do a volume force start to bring a brick online.

Version-Release number of selected component (if applicable):
=============================================================
mainline

How reproducible:
=================
3/5

Steps to Reproduce:
1. Create a brick-multiplex (brick mux) setup.
2. Create about 30 1x3 volumes.
3. Start the volumes.
4. Pump I/O to the base volume and another volume (an extra EC volume was created for this).
5. Kill a brick, say b1.
6. Force start any volume other than the base volume, preferably one higher in the ordering, e.g. vol15 or vol20 (a rough CLI sketch of these steps appears below, after the actual results).

Actual results:
===============
shd fails to start on one of the nodes.
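For reference, here is a rough shell sketch of the reproduction steps above. The hostnames (server1/server2/server3), brick paths (/bricks/...), and volume names are placeholders and not taken from the original report.

# Step 1: enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# Steps 2-3: create and start ~30 1x3 (replica 3) volumes
for i in $(seq 1 30); do
    gluster volume create vol$i replica 3 \
        server1:/bricks/vol$i server2:/bricks/vol$i server3:/bricks/vol$i force
    gluster volume start vol$i
done

# Step 4: mount the base volume (and the extra EC volume, if any) and run I/O on them

# Step 5: kill one brick process; read its PID from the status output first
gluster volume status vol1
kill -9 <brick-pid>        # replace with the PID reported for the killed brick

# Step 6: force start a different, higher-numbered volume
gluster volume start vol15 force

# Check the self-heal daemon state on every node
gluster volume status vol15

After step 6, glusterd is expected to restart glustershd on all nodes; the failure reported here is that it does not come up on one of them.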
REVIEW: https://review.gluster.org/19119 (glusterd: Nullify pmap entry for bricks belonging to same port) posted (#1) for review on master by Atin Mukherjee
COMMIT: https://review.gluster.org/19119 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

glusterd: Nullify pmap entry for bricks belonging to same port

Commit 30e0b86 tried to address all the stale port issues glusterd had when a brick is abruptly killed. In the brick multiplexing case, because of a bug, the portmap entry was not getting removed. This patch addresses the same.

Change-Id: Ib020b967a9b92f1abae9cab9492f0cacec59aaa1
BUG: 1530281
Signed-off-by: Atin Mukherjee <amukherj>
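As a quick way to confirm the behaviour this commit describes from the CLI (a sketch only, assuming the reproduction setup above; volume names are placeholders, and the fix itself lives in glusterd's portmap handling, not in these commands): after killing a brick and force-starting another volume, the restarted bricks should no longer pick up a stale port registration and glustershd should be online on every node.

# Bring the volume with the killed brick back via a force start
gluster volume start vol1 force

# Every brick should report a valid TCP port, and "Self-heal Daemon" should be
# shown as Online (Y) on all nodes
gluster volume status vol1
gluster volume status vol15

# heal info should work against any volume once shd is running on all nodes
gluster volume heal vol15 info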
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/