REVIEW: http://review.gluster.org/15977 (gfapi: glfs_subvol_done should NOT wait for graph migration.) posted (#1) for review on release-3.8 by Rajesh Joseph (rjoseph)
COMMIT: http://review.gluster.org/15977 committed in release-3.8 by Niels de Vos (ndevos)

------

commit 1d03b1b6b48e4ee9260852b35ec827f203e2287c
Author: Rajesh Joseph <rjoseph>
Date:   Tue Nov 22 22:25:42 2016 +0530

    gfapi: glfs_subvol_done should NOT wait for graph migration.

    In the graph_setup function, glfs_subvol_done is called and executed in
    an epoll thread. glfs_lock waits on another thread to finish graph
    migration. This can lead to a deadlock if we consume all the epoll
    threads.

    In general, any callback function executed in an epoll thread should
    not make a blocking call that waits on a network reply, either
    directly or indirectly; e.g. syncop functions should not be called in
    these threads.

    As a fix, we should not wait for migration in the callback path.

    > Reviewed-on: http://review.gluster.org/15913
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Smoke: Gluster Build System <jenkins.org>
    > Reviewed-by: Niels de Vos <ndevos>
    > CentOS-regression: Gluster Build System <jenkins.org>
    (cherry picked from commit 17d10b42fc4041442e6cd0bfda45944edea498c6)

    Change-Id: If96d0689fe1b4d74631e383048cdc30b01690dc2
    BUG: 1399915
    Signed-off-by: Rajesh Joseph <rjoseph>
    Reviewed-on: http://review.gluster.org/15977
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Niels de Vos <ndevos>
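The deadlock pattern the commit describes can be illustrated with a minimal, hypothetical sketch (not Gluster code): a fixed-size thread pool stands in for the epoll threads, and a queued "migration" task stands in for graph migration. When the callback running on the pool's only thread blocks waiting for the migration task, that task can never be scheduled, so the wait never completes; the fixed pattern returns without waiting.

```python
# Hypothetical illustration of the epoll-thread deadlock: all names here
# (demo, migrate, callback) are invented for this sketch, not Gluster APIs.
import concurrent.futures


def demo(wait_in_callback: bool) -> str:
    # A single worker models "all epoll threads are consumed".
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def migrate():
        # Stands in for graph migration, which also needs a pool thread.
        return "migrated"

    def callback():
        mig = pool.submit(migrate)  # queued; the only worker is busy here
        if wait_in_callback:
            # Anti-pattern: blocking in the callback thread. The migration
            # task can never run, so this wait can never succeed; a timeout
            # is used only so the demo terminates instead of hanging.
            try:
                mig.result(timeout=0.2)
                return "ok"
            except concurrent.futures.TimeoutError:
                return "deadlock"
        # Fixed pattern: do not wait for migration in the callback path.
        return "deferred"

    result = pool.submit(callback).result()
    pool.shutdown(wait=False)
    return result
```

Running `demo(True)` models the buggy behavior (the wait times out where the real code would hang forever), while `demo(False)` models the fix of returning without waiting.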
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.8, please open a new bug report.

glusterfs-3.8.8 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2017-January/000064.html
[2] https://www.gluster.org/pipermail/gluster-users/