Bug 1223203

Summary: [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Avra Sengupta <asengupt>
Component: snapshotAssignee: Avra Sengupta <asengupt>
Status: CLOSED DUPLICATE QA Contact: storage-qa-internal <storage-qa-internal>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: rhgs-3.1    CC: amukherj, annair, asrivast, rcyriac, rhs-bugs, sasundar, senaik, storage-qa-internal
Target Milestone: ---    Keywords: Triaged
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: Scheduler
Fixed In Version:    Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1218060 Environment:
Last Closed: 2015-06-01 09:02:07 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1218060, 1224249, 1230018    
Bug Blocks: 1223636    

Description Avra Sengupta 2015-05-20 06:03:27 UTC
+++ This bug was initially created as a clone of Bug #1218060 +++

Description of problem:
=======================
Initialising snap_scheduler from all nodes at the same time should fail with a proper error message: "Another snap_scheduler command is running"

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May  1 2015

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a distributed-replicate volume and mount it.

2. Create another volume for shared storage and mount it under /var/run/gluster/shared_storage.

3. Initialise the snap scheduler from all nodes at the same time:

Node1:
~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node

Node2, Node3, Node4:
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

It should instead fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again after some time.
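
One way the EEXIST race could be tolerated (a minimal sketch, not the actual snap_scheduler.py change; LOCK_FILE_DIR is the path taken from the traceback above) is to treat an already existing lock directory as success, so that concurrent invocations go on to contend for the lock file itself:

    import errno
    import os

    LOCK_FILE_DIR = '/var/run/gluster/shared_storage/snaps/lock_files/'

    def ensure_lock_dir():
        # Another node may create the directory at the same moment,
        # so an "already exists" error is expected and harmless here.
        try:
            os.makedirs(LOCK_FILE_DIR)
        except OSError as exc:
            if exc.errno != errno.EEXIST:
                raise

With the directory creation made idempotent like this, the nodes that lose the race on the per-command lock could then print the "Another snap_scheduler command is running" message instead of a traceback.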

Actual results:
On all nodes except the first, the command crashes with an unhandled Python traceback (OSError: [Errno 17] File exists) instead of printing a clean error message.

Expected results:
snap_scheduler: Another snap_scheduler command is running. Please try again after some time.

Additional info:

--- Additional comment from  on 2015-05-20 01:17:37 EDT ---

The same issue is seen when checking snap_scheduler.py status from all nodes at the same time.

Node1:
======
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

Node2, Node3, Node4:
====================
snap_scheduler.py status
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 575, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 545, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

Comment 3 Avra Sengupta 2015-06-01 09:02:07 UTC

*** This bug has been marked as a duplicate of bug 1224249 ***