Description of problem:
When the snapview-daemon (snapd) crashes and restarts, or is killed with SIGKILL and restarted, the client process can no longer communicate with the restarted snapd, and access to the snapshot world via the entry point directory fails with ENOTCONN.

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. Kill snapd by sending SIGKILL.
2. Restart snapd with "gluster volume start force".
3. Access the entry point directory (by default .snaps).

Actual results:
After snapd restarts following a crash or SIGKILL, the client is unable to talk to the restarted snapd.

Expected results:

Additional info:
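The steps above can be sketched as a small script. This is only a reproduction sketch: the volume name "testvol" and mount point "/mnt/testvol" are hypothetical, and each step is printed rather than executed so it can be reviewed before running against a live cluster (drop the echo in run() to execute for real).

```shell
#!/bin/sh
# Reproduction sketch for the snapd-restart ENOTCONN bug.
# Assumptions (hypothetical): a volume named "testvol" with USS enabled,
# mounted at /mnt/testvol, so the snapshot entry point is /mnt/testvol/.snaps.
VOL=testvol
MNT=/mnt/$VOL

run() {
    # Print each step instead of executing it; remove the echo
    # to run the commands on a live cluster.
    echo "+ $*"
}

# 1. Simulate a snapd crash by sending SIGKILL to the snapd process.
run "kill -KILL \$(pgrep -f 'snapd.*$VOL')"

# 2. Restart snapd (glusterd respawns it on a forced volume start).
run gluster volume start $VOL force

# 3. Access the snapshot entry point; before the fix this step failed
#    with ENOTCONN because the new snapd port was not saved in volinfo.
run ls "$MNT/.snaps"
```

The run() wrapper makes the sketch safe to execute anywhere; on a real cluster the third step is where the ENOTCONN surfaces.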
REVIEW: http://review.gluster.org/8084 (mgmt/glusterd: save the snapd port in volinfo after starting snapd) posted (#1) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8084 (mgmt/glusterd: save the snapd port in volinfo after starting snapd) posted (#2) for review on master by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8084 (mgmt/glusterd: save the snapd port in volinfo after starting snapd) posted (#3) for review on master by Raghavendra Bhat (raghavendra)
COMMIT: http://review.gluster.org/8084 committed in master by Kaushal M (kaushal)
------
commit 53d932b490c505901ddd1a0133e8125ad6dfd24c
Author: Raghavendra Bhat <raghavendra>
Date: Mon Jun 16 20:38:42 2014 +0530

    mgmt/glusterd: save the snapd port in volinfo after starting snapd

    Change-Id: I9266bbf4f67a2135f9a81b32fe88620be11af6ea
    BUG: 1109889
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8084
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Kaushal M <kaushal>
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED. Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report. glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users