Bug 1614124
| Summary: | glusterfsd process crashed in a multiplexed configuration during cleanup of a single brick-graph triggered by volume-stop. | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Mohit Agrawal <moagrawa> |
| Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, apaladug, bugs, kdhananj, moagrawa, nchilaka, rhinduja, rhs-bugs, sankarshan, sheggodu, storage-qa-internal, vdas |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-5.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1608352 | Environment: | |
| Last Closed: | 2018-10-23 15:16:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1608352 | | |

Comment 1
Worker Ant
2018-08-09 03:11:24 UTC
COMMIT: https://review.gluster.org/20657 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

core: Update condition in get_xlator_by_name_or_type

Problem: Sometimes a client connection fails after the error "cleanup flag is set for xlator. Try again later" is thrown. The situation arises only after a detach request has been received but the brick stack is not yet completely detached, and at the same time a client initiates a connection to the brick.

Solution: To resolve this, check the cleanup_starting flag in get_xlator_by_name_or_type, the function called by server_setvolume to attach a client to a brick.

Change-Id: I3720e42642fe495dd05211e2ed2cb976db9e231b
fixes: bz#1614124
Signed-off-by: Mohit Agrawal <moagrawal>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/
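For context, below is a minimal sketch in C (the language of the GlusterFS codebase) of the kind of lookup condition the patch describes: a search over the translator (xlator) tree that refuses to return a node whose cleanup_starting flag is set. The struct layout, the fixed-size child array, and the two-instances-of-one-brick scenario in main are simplified assumptions for illustration only, not the actual glusterfs sources; see the linked review for the real change.

```c
/*
 * Illustrative sketch (not the actual glusterfs source): a name/type
 * lookup over a simplified xlator tree that skips any node whose
 * cleanup_starting flag is set, so a client attaching via
 * server_setvolume cannot bind to a brick graph being torn down.
 */
#include <stdio.h>
#include <string.h>

typedef struct xlator {
    const char    *name;             /* brick/translator instance name */
    const char    *type;             /* translator type                */
    int            cleanup_starting; /* set when detach/cleanup begins */
    struct xlator *children[4];      /* simplified child list          */
    int            nchildren;
} xlator_t;

/* Depth-first search by name or type; a node under cleanup is treated
 * as if it were already detached and is never returned. */
static xlator_t *
get_xlator_by_name_or_type(xlator_t *this, const char *target, int is_name)
{
    for (int i = 0; i < this->nchildren; i++) {
        xlator_t *child = this->children[i];
        const char *value = is_name ? child->name : child->type;

        /* The gist of the fix: match the name/type AND require that
         * cleanup has not started on this xlator. */
        if (strcmp(value, target) == 0 && !child->cleanup_starting)
            return child;

        xlator_t *found = get_xlator_by_name_or_type(child, target, is_name);
        if (found)
            return found;
    }
    return NULL;
}

int
main(void)
{
    /* Assumed scenario for illustration: in a multiplexed brick
     * process, a brick being detached briefly coexists with a freshly
     * attached instance of the same name; the lookup must return the
     * live one. Fields: name, type, cleanup_starting, children, n. */
    xlator_t old_brick = { "vol1-brick0", "protocol/server", 1, {0}, 0 };
    xlator_t new_brick = { "vol1-brick0", "protocol/server", 0, {0}, 0 };
    xlator_t root      = { "top", "root", 0, { &old_brick, &new_brick }, 2 };

    xlator_t *xl = get_xlator_by_name_or_type(&root, "vol1-brick0", 1);
    printf("would attach to: %s (cleanup_starting=%d)\n",
           xl ? xl->name : "none", xl ? xl->cleanup_starting : -1);
    return 0;
}
```

With a name match alone, the lookup could hand server_setvolume the instance still being torn down, producing the "cleanup flag is set for xlator. Try again later" failure quoted above; requiring !cleanup_starting makes the search skip such nodes.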