RCA: The brick process crashes while stopping the volume because posix notify closes the mount_lock directory but never resets the dir handle to NULL afterwards, so a later cleanup pass calls closedir() on the stale handle.

(gdb) f 5
#5  0x00007f3b2d6e1c15 in sys_closedir (dir=<optimized out>) at syscall.c:113
113             return closedir (dir);
(gdb) f 6
#6  0x00007f3b1bde9157 in notify (this=<optimized out>, event=<optimized out>, data=<optimized out>) at posix.c:6618
6618            (void) sys_closedir (priv->mount_lock);
(gdb) l
6613            if (priv->fsyncer) {
6614                    (void) gf_thread_cleanup_xint (priv->fsyncer);
6615                    priv->fsyncer = 0;
6616            }
6617            if (priv->mount_lock)
6618                    (void) sys_closedir (priv->mount_lock);

Regards
Mohit Agrawal
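To make the failure mode concrete, below is a minimal standalone C sketch of the same pattern. The fake_posix_private struct and cleanup() function are illustrative stand-ins, not the real posix xlator code; they only show how closing a DIR handle without resetting it to NULL leads to a second closedir() on a stale pointer when the cleanup path runs again (as it can with many volumes under brick multiplexing):

/* Illustrative stand-in for the posix xlator's private state; not the
 * actual GlusterFS code. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_posix_private {
        DIR *mount_lock;        /* stands in for priv->mount_lock */
};

static void
cleanup (struct fake_posix_private *priv)
{
        if (priv->mount_lock) {
                (void) closedir (priv->mount_lock);
                /* BUG: without "priv->mount_lock = NULL;" here, a second
                 * call to cleanup() hands the already-freed handle to
                 * closedir() again. */
        }
}

int
main (void)
{
        struct fake_posix_private priv = { .mount_lock = opendir ("/tmp") };

        if (!priv.mount_lock) {
                perror ("opendir");
                return EXIT_FAILURE;
        }

        cleanup (&priv);   /* first cleanup pass: fine                     */
        cleanup (&priv);   /* second pass: double closedir() on stale ptr  */

        return EXIT_SUCCESS;
}

Whether the second closedir() faults immediately depends on the allocator, but it is undefined behaviour either way, which matches the crash seen in sys_closedir() in the backtrace above.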
REVIEW: https://review.gluster.org/17767 (posix: brick process crash after stop the volume while brick mux is on) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)
COMMIT: https://review.gluster.org/17767 committed in master by Jeff Darcy (jeff.us)
------
commit 61db7125a5b8db0bd4dd09b423bb54415c8bd484
Author: Mohit Agrawal <moagrawa>
Date:   Thu Jul 13 12:23:13 2017 +0530

    posix: brick process crash after stop the volume while brick mux is on

    Problem: Sometimes the brick process crashes after stopping the volume
    while brick mux is enabled and the number of volumes is high.

    Solution: In posix notify, at the time of closing the mount_lock dir,
    the dir handle needs to be set to NULL to avoid reuse of the same dir
    handle.

    BUG: 1470533
    Change-Id: Ifd41c20b3c597317851f91049a7c801949840b16
    Signed-off-by: Mohit Agrawal <moagrawa>
    Reviewed-on: https://review.gluster.org/17767
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Amar Tumballi <amarts>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
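Based on the gdb listing in the RCA above and the solution described in this commit message, the shape of the fix in the posix notify cleanup path is roughly the following (a sketch, not the verbatim patch from the review):

        if (priv->fsyncer) {
                (void) gf_thread_cleanup_xint (priv->fsyncer);
                priv->fsyncer = 0;
        }
        if (priv->mount_lock) {
                (void) sys_closedir (priv->mount_lock);
                /* Reset the handle so a repeated cleanup pass cannot hand
                 * the stale pointer to sys_closedir() again. */
                priv->mount_lock = NULL;
        }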
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/