Description of problem:
When executing test cases manually, we may sometimes want to terminate the test case midway for various reasons. The existing test case flow has no mechanism to call cleanup before terminating abnormally, so we end up with volume setups and mount points left uncleaned.

Version-Release number of selected component (if applicable): mainline

How reproducible:
Press Ctrl+C while executing a test case manually, then check the volume status.

Steps to Reproduce:
1. ./tests/sometestcase
2. Press Ctrl+C
3. gluster vol status/info

Actual results:
Residual volume info remains (e.g. a volume named "patchy").

Expected results:
A clean setup.

Additional info:
REVIEW: http://review.gluster.org/11882 (tests: call cleanup on receiving external signals INT, TERM and HUP) posted (#1) for review on master by Prasanna Kumar Kalever
COMMIT: http://review.gluster.org/11882 committed in master by Raghavendra Talur (rtalur)
------
commit db4e3a371c66c400b3cb95d4e7701625bef4ac95
Author: Prasanna Kumar Kalever <prasanna.kalever>
Date: Tue Aug 11 13:45:26 2015 +0530

    tests: call cleanup on receiving external signals INT, TERM and HUP

    Problem:
    When executing test cases manually, we may sometimes want to terminate
    the test case midway for various reasons. The existing test case flow
    has no mechanism to call cleanup before terminating abnormally, so we
    end up with volume setups and mount points left uncleaned.

    Solution:
    This patch traps such abnormal terminations, calls the 'cleanup'
    function as soon as they are caught, and then terminates the test case
    with an appropriate status.

    $ ./tests/basic/mount-nfs-auth.t
    1..87
    =========================
    TEST 1 (line 8): glusterd
    ok 1
    RESULT 1: 0
    =========================
    TEST 2 (line 9): pidof glusterd
    ok 2
    RESULT 2: 0
    =========================
    TEST 3 (line 10): gluster --mode=script --wignore volume info
    No volumes present
    ok 3
    RESULT 3: 0
    ^C
    received external signal --INT--, calling 'cleanup' ...

    $ glusterd && gluster vol status
    No volumes present

    Change-Id: Ia51a850c356e599b8b789cec22b9bb5e87e1548a
    BUG: 1252374
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
    Reviewed-on: http://review.gluster.org/11882
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Raghavendra Talur <rtalur>
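The mechanism the commit describes can be sketched with bash's `trap` builtin. This is a minimal illustration only, not the framework's actual include code: the `cleanup` function and the `CLEANED` flag here are hypothetical stand-ins for the real teardown of volumes and mount points, and the real patch additionally exits with a failure status after cleanup.

```shell
#!/bin/bash
# Minimal sketch (assumption: not the actual GlusterFS test harness code).
# 'cleanup' stands in for the framework's real teardown of volumes/mounts.

CLEANED=no

cleanup() {
    # The real framework would stop volumes and unmount test mounts here;
    # the actual patch also exits with a failure status afterwards.
    echo "received external signal, calling 'cleanup' ..."
    CLEANED=yes
}

# Run cleanup when the script is interrupted (Ctrl+C), hung up, or terminated.
trap cleanup INT TERM HUP

kill -HUP $$          # simulate an external signal; Ctrl+C would deliver INT
echo "CLEANED=$CLEANED"
```

Bash delivers the pending signal between commands, so the handler fires right after the `kill` builtin returns and before the final `echo`, which then reports `CLEANED=yes`.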
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user