Bug 1654270

Summary: glusterd crashed with a segmentation fault, possibly during a node reboot, while volume creates and deletes were in progress
Product: [Community] GlusterFS
Reporter: Sanju <srakonde>
Component: glusterd
Assignee: Sanju <srakonde>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: mainline
CC: amukherj, bmekala, bugs, nchilaka, rhs-bugs, sankarshan, srakonde, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1654161
Environment:
Last Closed: 2019-02-19 09:28:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1654161
Bug Blocks:

Comment 1 Worker Ant 2018-11-28 11:26:16 UTC
REVIEW: https://review.gluster.org/21743 (glusterd: don't acquire locks on resources when cleanup is already started) posted (#1) for review on master by Sanju Rakonde
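
The patch subject above describes the approach: refuse to take locks on shared resources once cleanup has begun, so a late request cannot race with teardown. A minimal sketch of that pattern in C, assuming a process-wide cleanup_lock mutex and a cleanup_started flag (illustrative names, not the actual glusterd symbols):

/* Sketch only: a worker checks, under cleanup_lock, whether cleanup has
 * started before operating on shared state; the shutdown path sets the
 * flag first, so late requests are rejected instead of racing teardown. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;
static bool cleanup_started = false;     /* protected by cleanup_lock */

static int do_volume_op(const char *volname)
{
    pthread_mutex_lock(&cleanup_lock);
    if (cleanup_started) {
        pthread_mutex_unlock(&cleanup_lock);
        fprintf(stderr, "rejecting op on %s: cleanup in progress\n", volname);
        return -1;
    }
    /* Operate on shared volume/peer structures while still holding
     * cleanup_lock, so teardown cannot start mid-operation. */
    pthread_mutex_unlock(&cleanup_lock);
    return 0;
}

static void begin_cleanup(void)
{
    pthread_mutex_lock(&cleanup_lock);
    cleanup_started = true;              /* new requests refused from here on */
    pthread_mutex_unlock(&cleanup_lock);
    /* ... free volumes, peers, RPC state ... */
}

int main(void)
{
    do_volume_op("testvol");             /* runs normally */
    begin_cleanup();
    do_volume_op("testvol");             /* rejected: cleanup already started */
    return 0;
}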

Comment 2 Worker Ant 2018-12-03 17:04:35 UTC
REVIEW: https://review.gluster.org/21743 (glusterd: perform rcu_read_lock/unlock() under cleanup_lock mutex) posted (#5) for review on master by Atin Mukherjee
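
By patchset 5 the same change had been reworked as described in the subject: every rcu_read_lock()/rcu_read_unlock() call is made while holding the cleanup mutex, so the cleanup path, which holds that mutex for the duration of teardown, cannot overlap a read-side critical section. A rough sketch of the idea using liburcu (cleanup_lock and the wrapper names are illustrative, not the actual glusterd macros):

/* Sketch only: pair each RCU read-side lock/unlock with a process-wide
 * cleanup mutex; build with: cc sketch.c -lurcu -lpthread */
#include <pthread.h>
#include <urcu.h>

static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

static void guarded_rcu_read_lock(void)
{
    pthread_mutex_lock(&cleanup_lock);
    rcu_read_lock();
    pthread_mutex_unlock(&cleanup_lock);
}

static void guarded_rcu_read_unlock(void)
{
    pthread_mutex_lock(&cleanup_lock);
    rcu_read_unlock();
    pthread_mutex_unlock(&cleanup_lock);
}

int main(void)
{
    rcu_register_thread();           /* every thread using RCU must register */

    guarded_rcu_read_lock();
    /* ... walk an RCU-protected peer/volume list ... */
    guarded_rcu_read_unlock();

    rcu_unregister_thread();
    return 0;
}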

Comment 3 Worker Ant 2019-01-02 07:29:02 UTC
REVIEW: https://review.gluster.org/21974 (glusterd: kill the process without releasing the cleanup mutex lock) posted (#1) for review on master by Sanju Rakonde
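
This follow-up targets the exit path itself: once cleanup has run under the cleanup mutex, the process is terminated without ever releasing it, so threads blocked on the mutex never wake up and touch freed state. A minimal sketch (illustrative names, not the actual glusterd code):

/* Sketch only: terminate while still holding cleanup_lock; the lock is
 * deliberately never released. */
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static pthread_mutex_t cleanup_lock = PTHREAD_MUTEX_INITIALIZER;

static void cleanup_and_exit(void)
{
    pthread_mutex_lock(&cleanup_lock);

    /* ... free volumes, peers, RPC transports ... */

    /* Kill the process while the mutex is still held: any thread waiting
     * on cleanup_lock stays blocked instead of resuming on freed memory. */
    kill(getpid(), SIGKILL);
}

int main(void)
{
    cleanup_and_exit();
    return 0;
}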

Comment 5 Worker Ant 2019-02-19 09:28:52 UTC
REVIEW: https://review.gluster.org/22146 (glusterd: adding a comment for code readability) merged (#5) on master by Atin Mukherjee

Comment 6 Shyamsundar 2019-03-25 16:32:19 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/