Bug 1508283
Summary: | stale brick processes getting created and volume status shows brick as down (pkill glusterfsd glusterfs, glusterd restart) | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Atin Mukherjee <amukherj> |
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | urgent | Docs Contact: | |
Priority: | urgent | ||
Version: | 3.12 | CC: | amukherj, bmekala, bugs, nchilaka, rhs-bugs, storage-qa-internal, vbellur |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | brick-multiplexing | ||
Fixed In Version: | glusterfs-glusterfs-3.12.3 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1506513 | Environment: | |
Last Closed: | 2017-11-29 05:53:24 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1506513 | ||
Bug Blocks: | 1505363, 1526368 |
Comment 1
Atin Mukherjee
2017-11-01 03:58:23 UTC
REVIEW: https://review.gluster.org/18603 (glusterd: fix brick restart parallelism) posted (#1) for review on release-3.12 by Atin Mukherjee

COMMIT: https://review.gluster.org/18603 committed in release-3.12 by -------------

glusterd: fix brick restart parallelism

glusterd's brick restart logic is not always sequential, as there are at least three different ways bricks get restarted:

1. through friend-sm and glusterd_spawn_daemons ()
2. through friend-sm and handling the volume quorum action
3. through friend handshaking when there is a quorum mismatch on friend import

In a brick-multiplexing setup, glusterd could end up trying to spawn the same brick process twice: within a fraction of a millisecond, two threads would hit glusterd_brick_start (), and because the brick-start criteria were met in both cases, glusterd had no way to reject either of them. As a solution, this race is controlled with two pieces of state: a boolean, start_triggered, which indicates that a brick start has been triggered and remains true until the brick dies or is killed, and a mutex lock to ensure that for a particular brick we never enter glusterd_brick_start () more than once at the same point in time.

Change-Id: I292f1e58d6971e111725e1baea1fe98b890b43e2
BUG: 1508283
Signed-off-by: Atin Mukherjee <amukherj>
(cherry picked from commit 82be66ef8e9e3127d41a4c843daf74c1d8aec4aa)

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-glusterfs-3.12.3, please open a new bug report.

glusterfs-glusterfs-3.12.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-November/053983.html
[2] https://www.gluster.org/pipermail/gluster-users/
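For illustration, here is a minimal C sketch of the flag-plus-mutex pattern the commit message describes: a per-brick start_triggered boolean guarded by a mutex so that only one of several racing threads actually spawns the brick process. The brick_t structure, spawn_brick_process(), brick_start() and brick_stopped() names are simplified assumptions for this sketch, not the actual glusterd code.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for glusterd's per-brick state (hypothetical). */
typedef struct brick {
        const char      *path;
        bool             start_triggered; /* true from start until the brick dies or is killed */
        pthread_mutex_t  restart_mutex;   /* serializes concurrent start attempts */
} brick_t;

/* Hypothetical helper that actually spawns the brick process. */
static int spawn_brick_process(brick_t *brick)
{
        printf("spawning brick process for %s\n", brick->path);
        return 0;
}

/*
 * Only one caller per brick lifetime gets past this gate: the mutex stops
 * two threads from racing into the start path at the same instant, and
 * start_triggered rejects later attempts until the brick is stopped.
 */
static int brick_start(brick_t *brick)
{
        int ret = 0;

        pthread_mutex_lock(&brick->restart_mutex);
        if (!brick->start_triggered) {
                ret = spawn_brick_process(brick);
                if (ret == 0)
                        brick->start_triggered = true;
        }
        pthread_mutex_unlock(&brick->restart_mutex);

        return ret;
}

/* When the brick process exits or is killed, allow a future restart. */
static void brick_stopped(brick_t *brick)
{
        pthread_mutex_lock(&brick->restart_mutex);
        brick->start_triggered = false;
        pthread_mutex_unlock(&brick->restart_mutex);
}

int main(void)
{
        brick_t b = { .path = "/bricks/b1",
                      .start_triggered = false,
                      .restart_mutex = PTHREAD_MUTEX_INITIALIZER };

        brick_start(&b);   /* spawns the brick process */
        brick_start(&b);   /* rejected: start already triggered */
        brick_stopped(&b); /* brick died or was killed; restart allowed again */
        brick_start(&b);   /* spawns again */
        return 0;
}
```

Under this scheme a duplicate call is a cheap no-op rather than a second brick process, which is exactly the failure mode the bug title describes (stale brick processes while volume status shows the brick as down).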