Bug 1574305

Summary: rm command hangs in fuse_request_send
Product: [Community] GlusterFS Reporter: Raghavendra G <rgowdapp>
Component: distribute    Assignee: Raghavendra G <rgowdapp>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: high Docs Contact:
Priority: unspecified    
Version: mainline    CC: amukherj, bugs, nbalacha, rgowdapp, rhinduja, rhs-bugs, storage-qa-internal, tdesala
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-v4.1.0 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1561999
Blocks: 1574798 (view as bug list)    Environment:
Last Closed: 2018-06-20 18:06:01 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1561999    
Bug Blocks: 1574798    

Comment 1 Worker Ant 2018-05-03 04:10:07 UTC
REVIEW: https://review.gluster.org/19953 (cluster/dht: unwind if dht_selfheal_dir_mkdir returns an error) posted (#1) for review on master by Raghavendra G

Comment 2 Worker Ant 2018-05-03 11:00:42 UTC
COMMIT: https://review.gluster.org/19953 committed in master by "N Balachandran" <nbalacha> with a commit message: cluster/dht: unwind if dht_selfheal_dir_mkdir returns an error

If dht_selfheal_dir_mkdir returns an error, the cbk passed to
dht_selfheal_directory is not invoked. The current codepath therefore
leaves a frame that is never unwound, resulting in a fop that hangs forever.

Change-Id: I422308b8a34a074301ca46b029ffe676f5e0f66c
fixes: bz#1574305
Signed-off-by: Raghavendra G <rgowdapp>
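
The failure mode described in the commit message is the classic forgotten-unwind pattern: an asynchronous operation must invoke its completion callback on every path, including error paths, or the caller waiting on the result (here, the FUSE request in fuse_request_send) blocks forever. The following is a minimal, self-contained sketch of that pattern, not the actual GlusterFS patch; names such as selfheal_frame, selfheal_cbk_t and dir_mkdir_stub are invented for illustration, and the real fix is the one posted at https://review.gluster.org/19953.

/*
 * Illustrative sketch only (hypothetical names, not GlusterFS code):
 * an async-style operation that must invoke its completion callback
 * on every path. If the error branch returns without calling the
 * callback, the waiting caller hangs.
 */
#include <stdio.h>
#include <errno.h>

/* Hypothetical completion callback type, analogous to the cbk passed
 * to dht_selfheal_directory in the commit message above. */
typedef void (*selfheal_cbk_t)(void *ctx, int op_ret, int op_errno);

struct selfheal_frame {
    selfheal_cbk_t cbk;   /* must be invoked exactly once */
    void          *caller_ctx;
};

/* Stand-in for a helper like dht_selfheal_dir_mkdir(); assume it can fail. */
static int dir_mkdir_stub(void)
{
    return -EIO;  /* simulate a failure */
}

/* Buggy shape: on error we return without unwinding, so the frame
 * (and the fop waiting on it) is left hanging. */
static int selfheal_directory_buggy(struct selfheal_frame *frame)
{
    int ret = dir_mkdir_stub();
    if (ret < 0)
        return ret;                        /* BUG: frame->cbk never called */

    frame->cbk(frame->caller_ctx, 0, 0);
    return 0;
}

/* Fixed shape: unwind with the error so the caller is always notified. */
static int selfheal_directory_fixed(struct selfheal_frame *frame)
{
    int ret = dir_mkdir_stub();
    if (ret < 0) {
        frame->cbk(frame->caller_ctx, -1, -ret);  /* unwind with the error */
        return ret;
    }

    frame->cbk(frame->caller_ctx, 0, 0);
    return 0;
}

static void print_result(void *ctx, int op_ret, int op_errno)
{
    (void)ctx;
    printf("callback invoked: op_ret=%d op_errno=%d\n", op_ret, op_errno);
}

int main(void)
{
    struct selfheal_frame frame = { .cbk = print_result, .caller_ctx = NULL };

    selfheal_directory_buggy(&frame);  /* prints nothing: caller would hang */
    selfheal_directory_fixed(&frame);  /* prints the error result */
    return 0;
}

Per the commit message, the real patch applies the same principle inside the DHT translator: when dht_selfheal_dir_mkdir fails, the frame is unwound with the error so the fop completes instead of hanging.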

Comment 3 Shyamsundar 2018-06-20 18:06:01 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still present in glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/