+++ This bug was initially created as a clone of Bug #1220348 +++
Description of problem:
The client hangs when listing the files in a particular directory; the files in that directory cannot be listed.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a (4+2) disperse volume, enable bitrot on the volume, and mount it on a client (FUSE).
2. Using dd, create 10k files on the mount; while the script is running, bring down three bricks, after which file creation fails on the client.
3. Try to list the files of the directory on the client; the client hangs (even after a re-mount).
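The steps above can be sketched as a shell session. All names here (volume `disp`, host `server1`, the brick paths, and the mount point `/mnt/disp`) are placeholders, and the commands assume a running glusterd on the server; this is a reproduction sketch, not a tested script.

```shell
# Hypothetical hosts and paths; assumes glusterd is running on server1.
gluster volume create disp disperse 6 redundancy 2 \
    server1:/bricks/b{1..6} force
gluster volume start disp
gluster volume bitrot disp enable

# FUSE-mount the volume on the client.
mount -t glusterfs server1:/disp /mnt/disp

# Create 10k files; while this loop runs, bring down three bricks
# (more than the redundancy count of 2).
for i in $(seq 1 10000); do
    dd if=/dev/zero of=/mnt/disp/file$i bs=64k count=1
done

# The hang reproduces here, even after a re-mount:
ls /mnt/disp
```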
Expected results:
The user should be able to list the files in the directory and read them.
--- Additional comment from Anand Avati on 2015-06-10 02:56:49 EDT ---
REVIEW: http://review.gluster.org/11152 (cluster/ec: Wind unlock fops at all cost) posted (#1) for review on master by Pranith Kumar Karampuri (email@example.com)
REVIEW: http://review.gluster.org/11166 (cluster/ec: Wind unlock fops at all cost) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/11166 committed in release-3.7 by Pranith Kumar Karampuri (email@example.com)
Author: Pranith Kumar K <firstname.lastname@example.org>
Date: Wed Jun 10 10:30:03 2015 +0530
cluster/ec: Wind unlock fops at all cost
Backport of http://review.gluster.org/11152
While files are being created, if more than the redundancy number of bricks
go down, the unlocks for these fops are not wound to the bricks. This
leads to stale locks, which cause hangs.
Wind unlock fops at all costs.
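The failure mode described in the commit message can be illustrated with a small simulation. This is not GlusterFS code: `Brick`, `lock`, and the two unlock variants are hypothetical stand-ins that model a (4+2) disperse volume, contrasting an unlock path that gives up once quorum is lost with one that winds the unlock to every brick that is still up.

```python
# Simplified model (not actual cluster/ec code) of why skipping
# unlocks after quorum loss leaves stale locks on surviving bricks.

REDUNDANCY = 2  # a (4+2) disperse volume tolerates 2 brick failures


class Brick:
    def __init__(self, name):
        self.name = name
        self.up = True
        self.locks = set()


def lock(bricks, inode):
    # Take the lock on every brick that is currently up.
    for b in bricks:
        if b.up:
            b.locks.add(inode)


def unlock_naive(bricks, inode):
    # Buggy behaviour: abort the unlock entirely once more than
    # REDUNDANCY bricks are down, so the surviving bricks keep
    # the lock forever and later lookups hang on it.
    if sum(not b.up for b in bricks) > REDUNDANCY:
        return
    for b in bricks:
        if b.up:
            b.locks.discard(inode)


def unlock_at_all_cost(bricks, inode):
    # Fixed behaviour: wind the unlock to every brick that is
    # still up, regardless of whether quorum was lost.
    for b in bricks:
        if b.up:
            b.locks.discard(inode)


bricks = [Brick(f"brick{i}") for i in range(6)]
lock(bricks, "file-1")
for b in bricks[:3]:        # three bricks go down: more than REDUNDANCY
    b.up = False

unlock_naive(bricks, "file-1")
stale = [b.name for b in bricks if b.up and "file-1" in b.locks]
print(stale)                # the three surviving bricks still hold the lock

unlock_at_all_cost(bricks, "file-1")
stale = [b.name for b in bricks if b.up and "file-1" in b.locks]
print(stale)                # no live brick holds a stale lock any more
```

In this toy model, the naive unlock leaves the lock held on the three surviving bricks, which is exactly the stale-lock state that makes the subsequent directory listing hang; winding the unlock unconditionally clears it.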
Signed-off-by: Pranith Kumar K <email@example.com>
Tested-by: Gluster Build System <firstname.lastname@example.org>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.
glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.