+++ This bug was initially created as a clone of Bug #1631356 +++

Description of problem:
glusterfsd keeps an fd open in the index xlator after the volume is stopped.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Enable brick_mux.
2. Create 100 volumes (test1..test100) in a 1x3 environment.
3. Start all the volumes.
4. Stop volumes test2..test100.
5. After stopping a volume, check the open fds of the brick process:
   ls -lrth /proc/<brick_pid>/fd | grep ".glusterfs"

Actual results:
/proc shows that .glusterfs is still held open by a brick that is already stopped.

Expected results:
No internal directory should be held open for a stopped brick.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-09-20 08:05:40 EDT ---

This bug is automatically being proposed for a Z-stream release of Red Hat Gluster Storage 3 under active development and open for bug fixes, by setting the release flag 'rhgs-3.4.z' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag.
REVIEW: https://review.gluster.org/21235 (core: glusterfsd keeping fd open in index xlator after stop the volume) posted (#1) for review on master by MOHIT AGRAWAL
RCA: After getting the termination request for a specific brick, we set the child_status flag to false for that brick and start sending disconnects on all xprts associated with it. Once the server has received the notification for all the xprts, it calls client_destroy, which internally calls the xlator cbks to release any directory opened by an xlator, and then calls fini on the brick xlators to clean up resources. At the time of initiating a connection request, server_setvolume also checks child_status, but this check was not synchronized with the detach path, so the brick would sometimes accept a connection after a detach request for the same brick had already arrived. Because such an xprt had not yet been added when the xprts associated with the brick were counted, the resources opened by that client were never released, and the index directory was still held open by the brick process after the brick was stopped.

Regards,
Mohit Agrawal
REVIEW: https://review.gluster.org/21284 (core: glusterfsd keeping fd open in index xlator) posted (#1) for review on master by MOHIT AGRAWAL
COMMIT: https://review.gluster.org/21235 committed in master by "Raghavendra G" <rgowdapp> with a commit message:

core: glusterfsd keeping fd open in index xlator

Problem: The current resource cleanup sequence is not correct while brick mux is enabled.

Solution:
1) Destroy the xprt only after cleaning up all fds associated with the client.
2) Before calling fini on the brick xlators, ensure no stub is still running on the brick.

Change-Id: I86195785e428f57d3ef0da3e4061021fafacd435
fixes: bz#1631357
Signed-off-by: Mohit Agrawal <moagrawal>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/