Description of problem:
Reported by Sakshi Bansal (sabansal): parallel rmdir from multiple clients results in the application receiving "Transport endpoint is not connected" messages even though there were no network disconnects.

Steps to Reproduce:
1. Create a 1x2 replica volume and fuse-mount it from 2 clients.
2. Run the following script from both clients:
-------------------------
#!/bin/bash
dir=$(dirname $(readlink -f $0))
echo 'Script in '$dir
while :
do
        mkdir -p foo$1/bar/gee
        mkdir -p foo$1/bar/gne
        mkdir -p foo$1/lna/gme
        rm -rf foo$1
done
-------------------------
REVIEW: http://review.gluster.org/14358 (cluster/afr: Return correct op_errno in pre-op) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#2) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#3) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#4) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#6) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#7) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14358 (cluster/afr: Check for required number of entrylks) posted (#8) for review on master by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/14358 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 86a87a2ec0984f450b36ae6414c2d6d66870af73
Author: Ravishankar N <ravishankar>
Date:   Wed May 18 14:37:46 2016 +0530

    cluster/afr: Check for required number of entrylks

    Problem: Parallel rmdir operations on the same directory result in
    ENOTCONN messages even though there was no network disconnect. While
    taking blocking entry locks during rmdir, AFR takes 2 sets of locks on
    all its children: one on (parent dir, name of the dir to be deleted),
    the other a full lock on the dir being deleted. We proceed to the
    pre-op stage even if only a single lock (but not all the needed locks)
    was obtained, only to fail it with ENOTCONN because
    afr_locked_nodes_get() returns zero nodes in afr_changelog_pre_op().

    Fix: After we get replies for all blocking lock requests, if we don't
    have the minimum number of locks needed to carry out the FOP, unlock
    and fail the FOP. The op_errno will be that of the last failed reply
    we got, i.e. whatever is set in afr_lock_cbk().

    Change-Id: Ibef25e65b468ebb5ea6ae1f5121a5f1201072293
    BUG: 1336381
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/14358
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
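For illustration, below is a minimal C sketch of the logic the fix describes: once replies for all blocking entrylk requests are in, proceed to the pre-op only if enough children granted the lock, otherwise unlock and fail the FOP with the op_errno recorded from the last failed lock reply. The struct fields and helper names (afr_lock_state, afr_release_partial_locks, afr_check_entrylk_count) are hypothetical placeholders, not the identifiers used in the actual patch.

-------------------------
/* Illustrative sketch only; not the actual code from review 14358.
 * Models the fix: count successful blocking-lock replies and fail the
 * FOP early with the last recorded op_errno when too few were obtained,
 * instead of letting the pre-op fail later with a misleading ENOTCONN. */

#include <errno.h>
#include <stdio.h>

struct afr_lock_state {
    int locked_count;   /* children that granted the blocking entrylk */
    int required_count; /* minimum locks needed to carry out the FOP */
    int last_op_errno;  /* op_errno from the last failed lock reply */
};

/* Hypothetical helper: release whatever locks were acquired. */
static void
afr_release_partial_locks(struct afr_lock_state *state)
{
    (void)state; /* the real code unwinds through AFR's unlock path */
}

/* Called after all blocking-lock replies have been received.
 * Returns 0 if the FOP can continue to the changelog pre-op,
 * or a negative errno if it must be failed right away. */
static int
afr_check_entrylk_count(struct afr_lock_state *state)
{
    if (state->locked_count >= state->required_count)
        return 0;

    afr_release_partial_locks(state);
    return -(state->last_op_errno);
}

int
main(void)
{
    /* Example: only 1 of the 2 required locks was granted. */
    struct afr_lock_state s = { .locked_count = 1, .required_count = 2,
                                .last_op_errno = EAGAIN };
    printf("result: %d\n", afr_check_entrylk_count(&s)); /* negative errno */
    return 0;
}
-------------------------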
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/