Description of problem:
volume-snapshot.t is failing spuriously because it carries additional test cases that restart glusterd. glusterd currently has no mechanism to indicate whether volume handshaking has completed, so even after peer handshaking finishes, running a volume command can still end up corrupting volume data while volume handshaking is in progress.

Version-Release number of selected component (if applicable): mainline

How reproducible: intermittent

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
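For illustration only: the missing piece described above is some kind of completion gate that volume command handlers could wait on until volume handshaking has finished. A minimal sketch of such a gate follows; the names (gd_handshake_finished, gd_wait_for_handshake, gd_hs_done) are hypothetical and do not exist in glusterd today.

/* Hypothetical sketch of a handshake-completion gate; none of these
 * symbols exist in glusterd, this only illustrates the idea. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t gd_hs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gd_hs_cond = PTHREAD_COND_INITIALIZER;
static bool            gd_hs_done = false;  /* set once all volumes are imported */

/* Called by the handshake path after the volume import completes. */
void
gd_handshake_finished(void)
{
        pthread_mutex_lock(&gd_hs_lock);
        gd_hs_done = true;
        pthread_cond_broadcast(&gd_hs_cond);
        pthread_mutex_unlock(&gd_hs_lock);
}

/* Called at the start of any handler that touches the volume list. */
void
gd_wait_for_handshake(void)
{
        pthread_mutex_lock(&gd_hs_lock);
        while (!gd_hs_done)
                pthread_cond_wait(&gd_hs_cond, &gd_hs_lock);
        pthread_mutex_unlock(&gd_hs_lock);
}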
http://review.gluster.org/11972 posted for review
COMMIT: http://review.gluster.org/11972 committed in master by Raghavendra Talur (rtalur)
------
commit aa1b166984657d53e68a2c4cbd16d2e46c12436b
Author: Atin Mukherjee <amukherj>
Date:   Fri Aug 21 10:54:39 2015 +0530

    tests: remove unwanted tests from volume-snapshot.t

    volume-snapshot.t fails spuriously because it carries additional test
    cases which restart glusterd and which are not really needed as far as
    the test coverage is concerned. Currently glusterd doesn't have a
    mechanism to indicate whether volume handshaking has completed, so even
    if peer handshaking finishes and all the peers are back in the cluster,
    any command which accesses the volume structures might end up in
    corruption while volume handshaking is still in progress. This is
    because the volume list has not yet been made URCU protected.

    Change-Id: Id8669c22584384f988be5e0a5a0deca7708a277d
    BUG: 1255599
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/11972
    Reviewed-by: Avra Sengupta <asengupt>
    Reviewed-by: Raghavendra Talur <rtalur>
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
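For context on the last point of the commit message: below is a hedged sketch of what read-side URCU protection of a volume-like list looks like with liburcu (urcu-bp flavour, link with -lurcu-bp). The struct and function names here are hypothetical and are not glusterd's actual data structures.

/* Illustrative only: URCU-protected list traversal with liburcu.
 * Writers still need their own mutual exclusion (not shown). */
#include <urcu-bp.h>          /* rcu_read_lock/unlock, synchronize_rcu */
#include <urcu/rculist.h>     /* cds_list_*_rcu helpers */
#include <stdio.h>
#include <stdlib.h>

struct volinfo {
        char                 name[64];
        struct cds_list_head vol_list;
};

static struct cds_list_head volumes;   /* shared list of volumes */

/* Reader: may run concurrently with RCU-aware list updates. */
static void
print_volumes(void)
{
        struct volinfo *vol = NULL;

        rcu_read_lock();
        cds_list_for_each_entry_rcu(vol, &volumes, vol_list)
                printf("volume: %s\n", vol->name);
        rcu_read_unlock();
}

/* Writer: unlink first, then wait for readers before freeing. */
static void
remove_volume(struct volinfo *vol)
{
        cds_list_del_rcu(&vol->vol_list);
        synchronize_rcu();       /* no reader can still hold a reference now */
        free(vol);
}

int
main(void)
{
        struct volinfo *vol = calloc(1, sizeof(*vol));

        CDS_INIT_LIST_HEAD(&volumes);
        snprintf(vol->name, sizeof(vol->name), "testvol");
        cds_list_add_rcu(&vol->vol_list, &volumes);

        print_volumes();
        remove_volume(vol);
        return 0;
}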
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has been closed. Hence closing this mainline BZ as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user