+++ This bug was initially created as a clone of Bug #1218060 +++

Description of problem:
=======================
Initialising snap_scheduler from all nodes at the same time should fail with a proper error message - "Another snap scheduler command is running"

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May 1 2015

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a dist-rep volume and mount it.
2. Create another shared storage volume and mount it under /var/run/gluster/shared_storage
3. Initialise the snap scheduler at the same time from all nodes

Node1:
~~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node

Node2, Node3, Node4:
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

It should instead fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again after some time.

Actual results:

Expected results:

Additional info:

--- Additional comment from on 2015-05-20 01:17:37 EDT ---

The same issue is seen when checking snap_scheduler.py status from all nodes at the same time.

Node1:
======
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

Node2, Node3, Node4:
====================
snap_scheduler.py status
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 575, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 545, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'
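The traceback points to a classic creation race: every node calls os.makedirs() on the same lock directory in the shared storage, the first caller wins, and the rest hit EEXIST, which propagates as an unhandled exception instead of the expected "Another snap_scheduler command is running" message. Below is a minimal sketch of how such a call can be made race-tolerant by ignoring EEXIST; this is only an illustration of the failure mode, not the actual patch applied for bug 1224249, and the LOCK_FILE_DIR value is taken from the traceback above.

import errno
import os

LOCK_FILE_DIR = "/var/run/gluster/shared_storage/snaps/lock_files/"

def ensure_lock_dir(path):
    # Create the shared lock directory if it does not exist yet.
    try:
        os.makedirs(path)
    except OSError as exc:
        # Another node created the directory first; that is not an error
        # for this caller. Re-raise anything else (permissions, ENOSPC, ...).
        if exc.errno != errno.EEXIST:
            raise

Once the directory is guaranteed to exist, the scheduler can attempt to take its lock file and, if the lock is already held by another node, print the intended "Another snap_scheduler command is running" message instead of crashing.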
*** This bug has been marked as a duplicate of bug 1224249 ***