Bug 1470533 - Brick Mux Setup: brick processes(glusterfsd) crash after a restart of volume which was preceded with some actions
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Assigned To: Mohit Agrawal
brick-multiplexing
Depends On:
Blocks: 1468514
Reported: 2017-07-13 02:43 EDT by Mohit Agrawal
Modified: 2017-09-05 13:36 EDT (History)
7 users

See Also:
Fixed In Version: glusterfs-3.12.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1468514
Environment:
Last Closed: 2017-09-05 13:36:59 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Comment 1 Mohit Agrawal 2017-07-13 02:49:43 EDT
RCA: The brick process crashes while the volume is being stopped because posix
     notify calls sys_closedir() on priv->mount_lock, but the dir handle is
     never reset to NULL after the directory is closed. A later notify then
     closes the same stale handle again, crashing the process.

>>>>>>>>>>>>>>>>>>>
(gdb) f 5
#5  0x00007f3b2d6e1c15 in sys_closedir (dir=<optimized out>) at syscall.c:113
113	        return closedir (dir);
(gdb) f 6
#6  0x00007f3b1bde9157 in notify (this=<optimized out>, event=<optimized out>, 
    data=<optimized out>) at posix.c:6618
6618	                        (void) sys_closedir (priv->mount_lock);
(gdb) l
6613	                if (priv->fsyncer) {
6614	                        (void) gf_thread_cleanup_xint (priv->fsyncer);
6615	                        priv->fsyncer = 0;
6616	                }
6617	                if (priv->mount_lock)
6618	                        (void) sys_closedir (priv->mount_lock);


>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Regards
Mohit Agrawal
Comment 2 Worker Ant 2017-07-13 02:58:44 EDT
REVIEW: https://review.gluster.org/17767 (posix: brick process crash after stop the volume while brick mux is on) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa@redhat.com)
Comment 3 Worker Ant 2017-07-13 08:39:34 EDT
COMMIT: https://review.gluster.org/17767 committed in master by Jeff Darcy (jeff@pl.atyp.us) 
------
commit 61db7125a5b8db0bd4dd09b423bb54415c8bd484
Author: Mohit Agrawal <moagrawa@redhat.com>
Date:   Thu Jul 13 12:23:13 2017 +0530

    posix: brick process crash after stop the volume while brick mux is on
    
    Problem: sometimes the brick process crashes after stopping the volume
             while brick mux is enabled and the number of volumes is high.
    
    Solution: In posix notify, when the mount_lock dir is closed, the dir
              handle needs to be set to NULL to avoid reuse of the same
              dir handle.
    
    BUG: 1470533
    Change-Id: Ifd41c20b3c597317851f91049a7c801949840b16
    Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
    Reviewed-on: https://review.gluster.org/17767
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Amar Tumballi <amarts@redhat.com>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Comment 4 Shyamsundar 2017-09-05 13:36:59 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/
