Bug 1631357
Summary: | glusterfsd keeping fd open in index xlator after stop the volume | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Mohit Agrawal <moagrawa> |
Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | urgent | Docs Contact: | |
Priority: | urgent | ||
Version: | mainline | CC: | amukherj, bugs, rhinduja, rhs-bugs, sankarshan, storage-qa-internal |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | All | ||
OS: | All | ||
Whiteboard: | |||
Fixed In Version: | glusterfs-6.0 | Doc Type: | If docs needed, set a value |
Doc Text: | Story Points: | --- | |
Clone Of: | 1631356 | Environment: | |
Last Closed: | 2019-03-25 16:30:43 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1631356, 1631372 |
Description
Mohit Agrawal
2018-09-20 12:07:06 UTC
REVIEW: https://review.gluster.org/21235 (core: glusterfsd keeping fd open in index xlator after stop the volume) posted (#1) for review on master by MOHIT AGRAWAL

RCA: After receiving a termination request for a specific brick, we set that brick's child_status flag to false and start sending disconnects on all xprts associated with the brick. Once the server has received notifications for all of those xprts, it calls client_destroy, which internally invokes the xlator cbks to release any directories opened by the xlators, and then calls fini on the brick xlators to clean up their resources. When a connection request comes in, server_setvolume also checks child_status, but this check was not synchronized with the detach path, so the brick sometimes accepted a request after a detach request had already been issued for it. Because such an xprt had not yet been added when the xprts associated with the brick were counted, the resources opened by that client were never released, and at the time the brick was stopped its index directory was still held open by the brick process.

Regards,
Mohit Agrawal

REVIEW: https://review.gluster.org/21284 (core: glusterfsd keeping fd open in index xlator) posted (#1) for review on master by MOHIT AGRAWAL

COMMIT: https://review.gluster.org/21235 committed in master by "Raghavendra G" <rgowdapp> with a commit message:

core: glusterfsd keeping fd open in index xlator

Problem: The current resource cleanup sequence is not correct while brick mux is enabled.

Solution:
1) Destroy the xprt only after cleaning up all fds associated with the client.
2) Before calling fini on the brick xlators, ensure no stub is still running on the brick.

Change-Id: I86195785e428f57d3ef0da3e4061021fafacd435
fixes: bz#1631357
Signed-off-by: Mohit Agrawal <moagrawal>

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/