+++ This bug was initially created as a clone of Bug #1218060 +++

Description of problem:
=======================
Initialising snap_scheduler from all nodes at the same time should fail with a proper error message - "Another snap scheduler command is running"

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May 1 2015

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a dist-rep volume and mount it.
2. Create another shared storage volume and mount it under /var/run/gluster/shared_storage
3. Initialise the snap scheduler at the same time from all nodes

Node1:
~~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node

Node2, Node3, Node4:
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

It should instead fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again after some time.

Actual results:

Expected results:

Additional info:

--- Additional comment from on 2015-05-20 01:17:37 EDT ---

The same issue is seen when checking snap_scheduler.py status from all nodes at the same time.

Node1:
======
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

Node2, Node3, Node4:
====================
snap_scheduler.py status
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 575, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 545, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

--- Additional comment from Anand Avati on 2015-06-04 07:48:45 EDT ---

REVIEW: http://review.gluster.org/11087 (snapshot/scheduler: Handle OSError in os. callbacks) posted (#1) for review on master by Avra Sengupta (asengupt)

--- Additional comment from Anand Avati on 2015-06-10 02:40:59 EDT ---

COMMIT: http://review.gluster.org/11087 committed in master by Krishnan Parthasarathi (kparthas)
------
commit d835219a30327ede60e4ef28210914ab30bd0712
Author: Avra Sengupta <asengupt>
Date:   Thu Jun 4 17:17:13 2015 +0530

    snapshot/scheduler: Handle OSError in os. callbacks

    Handle OSError and not IOError in os. callbacks.

    Change-Id: I2b5bfb629bacbd2d2e410d96034b4e2c11c4931e
    BUG: 1218060
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/11087
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
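For context, the race in the tracebacks above is that several nodes call os.makedirs() on the same shared-storage path at once; only one creation succeeds and the rest receive OSError with errno 17 (EEXIST). Below is a minimal sketch of the handling pattern the fix's commit message describes (catching OSError, not IOError). The helper name ensure_lock_file_dir is illustrative and not taken from the patch; only the LOCK_FILE_DIR path comes from the traceback.

import errno
import os
import sys

# Directory the scheduler uses for its lock files, per the traceback above.
LOCK_FILE_DIR = "/var/run/gluster/shared_storage/snaps/lock_files/"

def ensure_lock_file_dir():
    # os.makedirs() raises OSError (not IOError) when the path already
    # exists, so EEXIST from a concurrent creator must be tolerated.
    try:
        os.makedirs(LOCK_FILE_DIR)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            print("snap_scheduler: Failed to create %s: %s"
                  % (LOCK_FILE_DIR, exc.strerror))
            sys.exit(1)
        # EEXIST means another node won the race; the directory is usable.

With handling along these lines, a node that loses the race proceeds to the normal lock-acquisition path and can report the intended "Another snap_scheduler command is running" message instead of an unhandled traceback.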
REVIEW: http://review.gluster.org/11151 (snapshot/scheduler: Handle OSError in os. callbacks) posted (#1) for review on release-3.7 by Avra Sengupta (asengupt)
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user