The snapview-daemon (snapd) of a volume is not stopped when the volume is stopped. This leads to problems when the following sequence of operations is performed:

1) The volume was stopped (but snapd kept running).
2) The volume was deleted (the old snapd was still running).
3) A new volume was created, started, and mounted.
4) Data was created and snapshots were taken (with the same names and count as in the previous iteration).
5) USS was enabled. The enable path checks the pid file to determine whether snapd is running; since the stale snapd was still up, it simply returned without starting a new one.
6) When one of the snapshots was then entered via the .snaps directory, the old snapd tried to communicate with the old snapshot volume (which had been deleted as part of snapshot deletion) and got ENOTCONN.

The fix: when a volume is stopped, the snapd associated with it should also be stopped.
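The sequence above can be reproduced with the gluster CLI roughly as follows. This is only a sketch: the volume name (testvol), brick path, snapshot name (snap1), and mount point are placeholders, and it assumes a single-node test setup where the buggy glusterd is running.

```shell
# 1) Stop the volume (bug: the volume's snapd keeps running)
gluster volume stop testvol

# 2) Delete the volume (the stale snapd is still alive)
gluster volume delete testvol

# 3) Create, start, and mount a new volume with the same name
gluster volume create testvol node1:/bricks/brick1 force
gluster volume start testvol
mount -t glusterfs node1:/testvol /mnt/testvol

# 4) Create some data and take a snapshot with the same name as before
cp /etc/hosts /mnt/testvol/
gluster snapshot create snap1 testvol

# 5) Enable USS; the pid-file check finds the stale snapd and returns
gluster volume set testvol features.uss enable

# 6) Entering a snapshot via .snaps makes the stale snapd talk to an
#    already-deleted snapshot volume, which fails with ENOTCONN
ls /mnt/testvol/.snaps/snap1
```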
REVIEW: http://review.gluster.org/8076 (mgmt/glusterd: volume stop should also stop its snapview-daemon) posted (#1) for review on master by Raghavendra Bhat (raghavendra)
COMMIT: http://review.gluster.org/8076 committed in master by Vijay Bellur (vbellur)
------
commit 0031bd1d18c874f3b68b59df7f84fce354b9b86c
Author: Raghavendra Bhat <raghavendra>
Date:   Mon Jun 16 16:11:46 2014 +0530

    mgmt/glusterd: volume stop should also stop its snapview-daemon

    Change-Id: I702372c6c8341b54710c531662e3fd738cfb5f9a
    BUG: 1109770
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8076
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve the issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED. Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution. [1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html [2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report. glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html [2] http://supercolony.gluster.org/mailman/listinfo/gluster-users