+++ This bug was initially created as a clone of Bug #1220348 +++

Description of problem:
=======================
The client hangs when listing the files in a particular directory; the files cannot be listed.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-server-3.7.0beta1-0.69.git1a32479.el6.x86_64

How reproducible:

Steps to Reproduce:
===================
1. Create a (4+2) disperse volume, enable bitrot on it, and mount it on a client (FUSE).
2. Using dd, create 10k files on the mount; while the script is running, bring down three bricks. After that, file creation fails on the client.
3. Try to list the files of the directory on the client. The client hangs, even after a re-mount.

Actual results:
===============
The client hangs when listing the directory; the files cannot be listed.

Expected results:
=================
The user should be able to list the files in the directory and read them.

Additional info:

--- Additional comment from Anand Avati on 2015-06-10 02:56:49 EDT ---

REVIEW: http://review.gluster.org/11152 (cluster/ec: Wind unlock fops at all cost) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
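The reproduction steps above could be sketched as a shell script along the following lines. The volume name, server hostnames, brick paths, and mount point are all hypothetical placeholders, and the exact CLI syntax may vary between GlusterFS releases:

```shell
#!/bin/sh
# Sketch of the reproduction steps; "testvol", serverN, and the brick
# and mount paths are illustrative, not taken from the original report.

# 1. Create a (4+2) disperse volume, enable bitrot, mount it via FUSE.
gluster volume create testvol disperse 6 redundancy 2 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
    server4:/bricks/b4 server5:/bricks/b5 server6:/bricks/b6
gluster volume start testvol
gluster volume bitrot testvol enable
mount -t glusterfs server1:/testvol /mnt/testvol

# 2. Create 10k files with dd; while this loop runs, bring down three
#    bricks (e.g. kill the brick PIDs shown by `gluster volume status`).
for i in $(seq 1 10000); do
    dd if=/dev/zero of=/mnt/testvol/file.$i bs=1k count=1 2>/dev/null
done

# 3. Listing the directory now hangs, even after a re-mount.
ls /mnt/testvol
```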
REVIEW: http://review.gluster.org/11166 (cluster/ec: Wind unlock fops at all cost) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/11166 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu)

------

commit d52f95864b9f58d14a9d48a1d73b086009491278
Author: Pranith Kumar K <pkarampu>
Date:   Wed Jun 10 10:30:03 2015 +0530

    cluster/ec: Wind unlock fops at all cost

    Backport of http://review.gluster.org/11152

    Problem:
    While files are being created, if more than the redundancy number of
    bricks go down, the unlock for these fops does not reach the bricks.
    This leaves stale locks behind, leading to hangs.

    Fix:
    Wind unlock fops at all costs.

    BUG: 1230350
    Change-Id: I3312b0fe1694ad02af5307bcbaf233ac63058846
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11166
    Tested-by: Gluster Build System <jenkins.com>
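The failure mode the commit message describes can be illustrated with a small simulation. The function names and the quorum check below are an illustrative sketch of the EC translator's behaviour, not the actual glusterfs code:

```python
BRICKS = 6        # a (4+2) disperse volume
REDUNDANCY = 2

def lock_fop(up, locked):
    """Wind a lock to every brick that is up."""
    for i in range(BRICKS):
        if up[i]:
            locked[i] = True

def unlock_fop_buggy(up, locked):
    """Buggy sketch: unlock is skipped when more than REDUNDANCY
    bricks are down, leaving stale locks on the surviving bricks."""
    if sum(up) < BRICKS - REDUNDANCY:
        return                      # unlock never reaches the bricks
    for i in range(BRICKS):
        if up[i]:
            locked[i] = False

def unlock_fop_fixed(up, locked):
    """Fixed sketch: wind unlock to every live brick, at all cost."""
    for i in range(BRICKS):
        if up[i]:
            locked[i] = False

def stale_locks(up, locked):
    """Count locks still held on bricks that are up."""
    return sum(1 for i in range(BRICKS) if up[i] and locked[i])

# A create fop takes locks while all bricks are up...
up = [True] * BRICKS
locked = [False] * BRICKS
lock_fop(up, locked)

# ...then three bricks (more than the redundancy of 2) go down.
up[0] = up[1] = up[2] = False

unlock_fop_buggy(up, locked)
print(stale_locks(up, locked))   # 3 stale locks -> later lookups hang

unlock_fop_fixed(up, locked)
print(stale_locks(up, locked))   # 0
```

Once the stale locks are left behind, any later operation on the directory (such as the `ls` in the reproduction steps) blocks on them, which matches the observed hang even after a re-mount.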
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user