Bug 1457981
Summary: | client fails to connect to the brick due to an incorrect port reported back by glusterd | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj>
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj>
Status: | CLOSED CURRENTRELEASE | QA Contact: |
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | mainline | CC: | bmekala, bugs, knarra, rgowdapp, rhs-bugs, rkavunga, storage-qa-internal, vbellur
Target Milestone: | --- | Keywords: | Triaged
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.12.0 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1451756 | Environment: |
Last Closed: | 2017-09-05 17:32:34 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1451756 | |
Description
Atin Mukherjee 2017-06-01 16:43:52 UTC
REVIEW: https://review.gluster.org/17447 (glusterd: fix brick start race) posted (#1) for review on master by Atin Mukherjee (amukherj)

REVIEW: https://review.gluster.org/17447 (glusterd: fix brick start race) posted (#2) for review on master by Atin Mukherjee (amukherj)

REVIEW: https://review.gluster.org/17447 (glusterd: fix brick start race) posted (#3) for review on master by Atin Mukherjee (amukherj)

REVIEW: https://review.gluster.org/17447 (glusterd: fix brick start race) posted (#4) for review on master by Atin Mukherjee (amukherj)

COMMIT: https://review.gluster.org/17447 committed in master by Jeff Darcy (jeff.us)

------

commit 7b58ec260152bdcf840ac622dbb883ce8b593f63
Author: Atin Mukherjee <amukherj>
Date: Thu Jun 1 22:05:51 2017 +0530

glusterd: fix brick start race

This commit handles a race in which glusterd could try to spawn the same brick process twice with two different ports, leaving the glusterd portmapper with the same brick entry on two different ports. Clients would then fail to connect to the brick because glusterd communicated back an incorrect port.

In glusterd_brick_start (), checking the brickinfo->status flag to identify whether a brick has already been started by glusterd is not sufficient. During a glusterd restart, glusterd_restart_bricks () is called through glusterd_spawn_daemons () in a synctask, and immediately afterwards glusterd_do_volume_quorum_action () (with server-side quorum set to on) tries to start the brick again. If the RPC_CLNT_CONNECT event for that brick has not yet been processed by glusterd, brickinfo->status is still marked GF_BRICK_STOPPED, so the brick start is reattempted on a different port. That corrupts the portmap and causes clients to fetch an incorrect port.

The fix is to introduce a new enum value, GF_BRICK_STARTING, for brickinfo->status. It is set when glusterd attempts a brick start and transitions to started through the RPC_CLNT_CONNECT event. For brick multiplexing this value has no effect, since on an attach-brick request the brickinfo->status flag is marked as started directly. This patch also removes the started_here flag, which is redundant with brickinfo->status.

Change-Id: I9dda1a9a531b67734a6e8c7619677867b520dcb2
BUG: 1457981
Signed-off-by: Atin Mukherjee <amukherj>
Reviewed-on: https://review.gluster.org/17447
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Jeff Darcy <jeff.us>

This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/
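For illustration, here is a minimal sketch of the state machine the commit describes. The enum values (GF_BRICK_STOPPED, GF_BRICK_STARTING, GF_BRICK_STARTED) and the brickinfo->status field are taken from the commit message; the struct layout, the helper names brick_start() and brick_rpc_connect(), and the port handling are simplified assumptions for this sketch, not the actual glusterd implementation.

```c
/* Sketch of the GF_BRICK_STARTING fix; names of helpers and the struct
 * layout are hypothetical, only the enum values and the status field
 * mirror the patch. */
#include <stdio.h>
#include <stdbool.h>

typedef enum {
    GF_BRICK_STOPPED,   /* brick process is not running */
    GF_BRICK_STARTING,  /* start attempted, RPC_CLNT_CONNECT not yet seen */
    GF_BRICK_STARTED,   /* RPC_CLNT_CONNECT processed, brick is up */
} gf_brick_status_t;

typedef struct {
    const char       *path;
    int               port;
    gf_brick_status_t status;
} brickinfo_t;

/* Hypothetical stand-in for glusterd_brick_start(): a brick that is
 * already STARTING or STARTED is skipped, which is what prevents the
 * second spawn on a different port. */
static bool
brick_start(brickinfo_t *brick, int next_port)
{
    if (brick->status == GF_BRICK_STARTING ||
        brick->status == GF_BRICK_STARTED) {
        printf("skip start of %s: already on port %d\n",
               brick->path, brick->port);
        return false;
    }
    brick->port = next_port;           /* port registered with the portmapper */
    brick->status = GF_BRICK_STARTING; /* spawn attempted, not yet connected */
    printf("spawning brick %s on port %d\n", brick->path, brick->port);
    return true;
}

/* Hypothetical stand-in for the RPC_CLNT_CONNECT notification handler:
 * only here does the brick transition to GF_BRICK_STARTED. */
static void
brick_rpc_connect(brickinfo_t *brick)
{
    brick->status = GF_BRICK_STARTED;
    printf("brick %s connected, status -> STARTED\n", brick->path);
}

int
main(void)
{
    brickinfo_t brick = { "/bricks/b1", 0, GF_BRICK_STOPPED };

    /* glusterd_restart_bricks() path: first start attempt wins. */
    brick_start(&brick, 49152);

    /* glusterd_do_volume_quorum_action() races in before RPC_CLNT_CONNECT:
     * with the fix, this second attempt is skipped instead of re-spawning
     * the brick on a different port. */
    brick_start(&brick, 49153);

    brick_rpc_connect(&brick); /* the connect event finally lands */
    return 0;
}
```

The point of the intermediate state: with only STOPPED/STARTED, the racing caller observes STOPPED before the connect event arrives and re-spawns the brick on a new port, so the portmapper ends up with two entries for the same brick; GF_BRICK_STARTING closes that window.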