Bug 1530281
| Summary: | glustershd fails to start on a volume force start after a brick is down | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
| Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bmekala, bugs, nchilaka, rhs-bugs, storage-qa-internal, vbellur |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | brick-multiplexing | | |
| Fixed In Version: | glusterfs-4.0.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1530217 | | |
| Cloned As: | 1530448 (view as bug list) | Environment: | |
| Last Closed: | 2018-03-15 11:24:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1530217, 1530325 | | |
| Bug Blocks: | 1530448, 1530449, 1530450 | | |
Comment 1
Atin Mukherjee
2018-01-02 14:58:21 UTC
REVIEW: https://review.gluster.org/19119 (glusterd: Nullify pmap entry for bricks belonging to same port) posted (#1) for review on master by Atin Mukherjee

COMMIT: https://review.gluster.org/19119 committed in master by "Atin Mukherjee" <amukherj> with the commit message:

    glusterd: Nullify pmap entry for bricks belonging to same port

    Commit 30e0b86 tried to address all the stale port issues glusterd had
    when a brick is abruptly killed. In the brick multiplexing case, because
    of a bug, the portmap entry was not getting removed. This patch addresses
    that.

    Change-Id: Ib020b967a9b92f1abae9cab9492f0cacec59aaa1
    BUG: 1530281
    Signed-off-by: Atin Mukherjee <amukherj>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-4.0.0, please open a new bug report.

glusterfs-4.0.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-March/000092.html
[2] https://www.gluster.org/pipermail/gluster-users/
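For readers unfamiliar with the portmap registry, the sketch below illustrates the class of bug the commit message describes. It is a minimal, hypothetical model, not the actual patch: the `struct pmap_entry` layout, the `pmap_remove_brick` helper, and the space-separated brick-name list are assumptions made for this example (glusterd's real registry lives in glusterd-pmap.c and is considerably more involved). Under brick multiplexing, several bricks share one port; the point is that when the last brick on a port goes away, the entry must be freed *and* set to NULL, otherwise the port looks occupied forever and a subsequent `volume start force` cannot bring daemons such as glustershd back up.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PORT_MAX 65536

/* Hypothetical, simplified port-map entry: a port maps to a
 * space-separated list of the multiplexed bricks using it,
 * or NULL when the port is free. */
struct pmap_entry {
    char *bricknames;
};

static struct pmap_entry pmap[PORT_MAX];

/* Remove one brick from the entry for `port`.  The essential step is
 * at the bottom: once no brick remains, the entry is nullified so the
 * port is genuinely free again instead of appearing stale/in-use. */
static void pmap_remove_brick(int port, const char *brick)
{
    struct pmap_entry *e;
    char *rebuilt, *tok, *saveptr = NULL;

    if (port < 0 || port >= PORT_MAX)
        return;
    e = &pmap[port];
    if (!e->bricknames)
        return;

    rebuilt = calloc(strlen(e->bricknames) + 1, 1);
    if (!rebuilt)
        return;

    /* Rebuild the list without the departing brick. */
    for (tok = strtok_r(e->bricknames, " ", &saveptr); tok;
         tok = strtok_r(NULL, " ", &saveptr)) {
        if (strcmp(tok, brick) == 0)
            continue;
        if (*rebuilt)
            strcat(rebuilt, " ");
        strcat(rebuilt, tok);
    }

    free(e->bricknames);
    if (*rebuilt) {
        e->bricknames = rebuilt;   /* other bricks still own the port */
    } else {
        free(rebuilt);
        e->bricknames = NULL;      /* nullify: port is genuinely free */
    }
}

int main(void)
{
    /* Two multiplexed bricks sharing one port. */
    pmap[49152].bricknames = strdup("brickA brickB");

    pmap_remove_brick(49152, "brickA");
    printf("after 1st remove: %s\n", pmap[49152].bricknames);

    pmap_remove_brick(49152, "brickB");
    printf("after 2nd remove: %s\n",
           pmap[49152].bricknames ? pmap[49152].bricknames
                                  : "(null, port free)");
    return 0;
}
```

In the failure mode this bug reports, the equivalent of that final NULL assignment was missing in the multiplexing path, so after a brick was killed the stale entry kept the port marked in-use and the force start of the volume could not spawn glustershd.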