Bug 1631357 - glusterfsd keeping fd open in index xlator after stop the volume
Summary: glusterfsd keeping fd open in index xlator after stop the volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Mohit Agrawal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1631356 1631372
 
Reported: 2018-09-20 12:07 UTC by Mohit Agrawal
Modified: 2019-03-25 16:30 UTC
CC: 6 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1631356
Environment:
Last Closed: 2019-03-25 16:30:43 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Mohit Agrawal 2018-09-20 12:07:06 UTC
+++ This bug was initially created as a clone of Bug #1631356 +++

Description of problem:
glusterfsd keeps fds open in the index xlator after the volume is stopped

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Enable brick_mux.
2. Create 100 volumes (test1..test100) in a 1x3 environment.
3. Start all the volumes.
4. Stop volumes test2..test100.
5. After stopping the volumes, check the brick process's fds in /proc
   (see the sketch just after these steps):
  ls -lrth /proc/<brick_pid>/fd | grep ".glusterfs"
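
For reference, the step-5 check can also be run as a small standalone program (a sketch only, not part of GlusterFS) that walks /proc/<brick_pid>/fd and prints every descriptor that still resolves to a path containing ".glusterfs":

/*
 * check_index_fds.c - sketch only, not part of GlusterFS.
 * Walks /proc/<pid>/fd and prints every descriptor that still resolves to a
 * path containing ".glusterfs" (e.g. the index directory of a brick).
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char fd_dir[64], link_path[PATH_MAX], target[PATH_MAX];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <brick_pid>\n", argv[0]);
        return 1;
    }

    snprintf(fd_dir, sizeof(fd_dir), "/proc/%s/fd", argv[1]);

    DIR *dir = opendir(fd_dir);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (ent->d_name[0] == '.')   /* skip "." and ".." */
            continue;
        snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir, ent->d_name);
        ssize_t len = readlink(link_path, target, sizeof(target) - 1);
        if (len < 0)
            continue;
        target[len] = '\0';
        if (strstr(target, ".glusterfs"))
            printf("fd %s -> %s\n", ent->d_name, target);
    }

    closedir(dir);
    return 0;
}

Compile with gcc and pass the brick PID as the only argument; for a correctly stopped brick it should print nothing.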

Actual results:
After the volumes are stopped, /proc shows that the brick process still
holds fds under .glusterfs for bricks that have already been stopped

Expected results:
No internal directory should remain open for a stopped brick

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-09-20 08:05:40 EDT ---

This bug is automatically being proposed for a Z-stream release of Red Hat Gluster Storage 3 under active development and open for bug fixes, by setting the release flag 'rhgs-3.4.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

Comment 1 Worker Ant 2018-09-20 12:21:25 UTC
REVIEW: https://review.gluster.org/21235 (core: glusterfsd keeping fd open in index xlator after stop the volume) posted (#1) for review on master by MOHIT AGRAWAL

Comment 2 Mohit Agrawal 2018-09-20 12:25:27 UTC
RCA: After getting a termination request for a specific brick, we set that
     brick's child_status flag to false and start sending disconnects on all
     xprts associated with the brick. Once the server has received
     notifications for all of those xprts, it calls client_destroy, which
     internally calls the xlator cbks to release any directory opened by an
     xlator, and then calls fini on the brick xlators to clean up resources.

     At the time of initiating a connection request, server_setvolume also
     checks child_status, but this check was not synchronized with the detach
     path, so a brick would sometimes accept a request after a detach request
     for the same brick had already been received. Because that xprt had not
     yet been added when the xprts associated with the brick were counted, the
     resources opened by the client were never released, and after the brick
     was stopped its index directory was still held open by the brick process.
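
The race can be pictured with a minimal, self-contained sketch. The names here (brick_t, attach_client, detach_brick, xprt_count) are hypothetical simplifications, not the actual GlusterFS server code; the point is that the child_status check in the attach path and the registration of the transport must happen under the same lock taken by the detach path, so a late client can no longer slip in unnoticed:

/*
 * race_sketch.c - hypothetical, simplified illustration; not GlusterFS code.
 * The child_status check in the attach path and the registration of the
 * transport must be atomic with respect to the detach path, otherwise a
 * client can attach after the detach started and its fds are never released.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct brick {
    pthread_mutex_t lock;
    bool            child_status;  /* false once a detach request arrives */
    int             xprt_count;    /* transports registered on this brick */
} brick_t;

/* Analogue of the server_setvolume path: refuse new clients once the brick
 * is being detached, and make the check + registration a single step. */
static bool attach_client(brick_t *brick)
{
    bool accepted = false;

    pthread_mutex_lock(&brick->lock);
    if (brick->child_status) {      /* brick still serving: register xprt */
        brick->xprt_count++;
        accepted = true;
    }
    pthread_mutex_unlock(&brick->lock);

    return accepted;
}

/* Analogue of the detach path: flip child_status under the same lock, then
 * report how many registered xprts must be disconnected before fini runs. */
static int detach_brick(brick_t *brick)
{
    int to_disconnect;

    pthread_mutex_lock(&brick->lock);
    brick->child_status = false;
    to_disconnect = brick->xprt_count;
    pthread_mutex_unlock(&brick->lock);

    return to_disconnect;
}

int main(void)
{
    brick_t brick = { .child_status = true, .xprt_count = 0 };
    pthread_mutex_init(&brick.lock, NULL);

    attach_client(&brick);              /* accepted: brick is still live */
    int n = detach_brick(&brick);       /* detach starts: 1 xprt to drop */
    bool late = attach_client(&brick);  /* late client is rejected */

    printf("xprts to disconnect: %d, late attach accepted: %s\n",
           n, late ? "yes" : "no");
    return 0;
}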


Regards
Mohit Agrawal

Comment 3 Worker Ant 2018-09-27 03:24:52 UTC
REVIEW: https://review.gluster.org/21284 (core: glusterfsd keeping fd open in index xlator) posted (#1) for review on master by MOHIT AGRAWAL

Comment 4 Worker Ant 2018-10-08 15:46:28 UTC
COMMIT: https://review.gluster.org/21235 committed in master by "Raghavendra G" <rgowdapp> with a commit message- core: glusterfsd keeping fd open in index xlator

Problem: The current resource cleanup sequence is not
         correct while brick mux is enabled.

Solution: 1) Destroy the xprt only after cleaning up all fds
             associated with a client.
          2) Before calling fini for the brick xlators, ensure
             no stub is still running on the brick.

Change-Id: I86195785e428f57d3ef0da3e4061021fafacd435
fixes: bz#1631357
Signed-off-by: Mohit Agrawal <moagrawal>
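
A minimal sketch of the two-step ordering described in the commit message above; the names (client_release_fd, brick_try_fini, stubs_in_flight) are hypothetical simplifications, not the actual GlusterFS code:

/*
 * cleanup_order_sketch.c - hypothetical, simplified illustration; not
 * GlusterFS code. The transport is destroyed only after the client's last fd
 * is released, and fini runs only once no stub is still running on the brick.
 */
#include <stdio.h>

typedef struct client {
    int fd_count;        /* fds this client still holds on the brick */
    int xprt_destroyed;  /* set once it is safe to tear down the xprt */
} client_t;

typedef struct brick_ctx {
    int stubs_in_flight; /* requests still being processed on the brick */
    int fini_done;       /* set once fini has actually been called */
} brick_ctx_t;

/* Step 1: release one fd; destroy the xprt only when the last fd is gone. */
static void client_release_fd(client_t *c)
{
    if (c->fd_count > 0 && --c->fd_count == 0)
        c->xprt_destroyed = 1;
}

/* Step 2: call fini only when no stub is still running on the brick. */
static void brick_try_fini(brick_ctx_t *b)
{
    if (b->stubs_in_flight == 0)
        b->fini_done = 1;
}

int main(void)
{
    client_t    c = { .fd_count = 2 };
    brick_ctx_t b = { .stubs_in_flight = 1 };

    client_release_fd(&c);   /* one fd left, xprt still alive */
    client_release_fd(&c);   /* last fd released -> xprt destroyed */

    brick_try_fini(&b);      /* a stub is still running: fini is deferred */
    b.stubs_in_flight = 0;
    brick_try_fini(&b);      /* brick is idle: fini can run safely */

    printf("xprt destroyed: %d, fini done: %d\n",
           c.xprt_destroyed, b.fini_done);
    return 0;
}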

Comment 5 Shyamsundar 2019-03-25 16:30:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

