REVIEW: http://review.gluster.org/15913 (gfapi: glfs_subvol_done should not call glfs_lock) posted (#1) for review on master by Rajesh Joseph (rjoseph)
REVIEW: http://review.gluster.org/15913 (gfapi: glfs_subvol_done should not call glfs_lock) posted (#2) for review on master by Rajesh Joseph (rjoseph)
REVIEW: http://review.gluster.org/15913 (gfapi: glfs_subvol_done should not call glfs_lock) posted (#3) for review on master by Rajesh Joseph (rjoseph)
REVIEW: http://review.gluster.org/15913 (gfapi: glfs_subvol_done should not call glfs_lock) posted (#4) for review on master by Rajesh Joseph (rjoseph)
REVIEW: http://review.gluster.org/15913 (gfapi: glfs_subvol_done should NOT wait for graph migration.) posted (#5) for review on master by Rajesh Joseph (rjoseph)
COMMIT: http://review.gluster.org/15913 committed in master by Niels de Vos (ndevos)
------
commit 17d10b42fc4041442e6cd0bfda45944edea498c6
Author: Rajesh Joseph <rjoseph>
Date:   Tue Nov 22 22:25:42 2016 +0530

    gfapi: glfs_subvol_done should NOT wait for graph migration.

    In the graph_setup function, glfs_subvol_done is called and is executed
    in an epoll thread. glfs_lock waits on another thread to finish graph
    migration. This can lead to a deadlock if we consume all the epoll
    threads.

    In general, any callback function executed in an epoll thread should not
    make a blocking call that waits on a network reply, either directly or
    indirectly; e.g. syncop functions should not be called in these threads.

    As a fix, we should not wait for migration in the callback path.

    Change-Id: If96d0689fe1b4d74631e383048cdc30b01690dc2
    BUG: 1397754
    Signed-off-by: Rajesh Joseph <rjoseph>
    Reviewed-on: http://review.gluster.org/15913
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Niels de Vos <ndevos>
    CentOS-regression: Gluster Build System <jenkins.org>
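For illustration, the locking pattern described in the commit message can be sketched as follows. This is a minimal, hypothetical example rather than the actual gfapi code: the fake_glfs_* names and the migration_in_progress flag are invented for the sketch. It only shows the general idea that the regular lock path may block until graph migration completes, while the callback path running in an epoll thread must only take the mutex and never wait for migration.

    /*
     * Minimal sketch (NOT the real gfapi internals). The real glfs_lock()
     * takes the fs mutex and may then block until an in-progress graph
     * migration finishes. If a callback running in an epoll thread blocks
     * there, and all epoll threads end up blocked, the network reply that
     * would complete the migration is never processed: deadlock.
     */
    #include <pthread.h>
    #include <stdbool.h>

    struct fake_glfs {
        pthread_mutex_t mutex;
        pthread_cond_t  cond;
        bool            migration_in_progress;  /* graph switch under way */
    };

    /* Application-thread path: may block until migration finishes. */
    static void
    fake_glfs_lock_wait(struct fake_glfs *fs)
    {
        pthread_mutex_lock(&fs->mutex);
        while (fs->migration_in_progress)
            pthread_cond_wait(&fs->cond, &fs->mutex); /* waits on another thread */
    }

    /* Callback (epoll-thread) path: only take the mutex, never wait for
     * migration, so the thread stays available to process replies. */
    static void
    fake_glfs_lock_nowait(struct fake_glfs *fs)
    {
        pthread_mutex_lock(&fs->mutex);
    }

    static void
    fake_glfs_unlock(struct fake_glfs *fs)
    {
        pthread_mutex_unlock(&fs->mutex);
    }

In this sketch, something like fake_glfs_lock_nowait would be what the callback path (e.g. glfs_subvol_done) uses, while ordinary gfapi entry points keep the waiting behaviour.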
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report. glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html [2] https://www.gluster.org/pipermail/gluster-users/