Bug 1218060 - [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
Summary: [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard: Scheduler
Depends On:
Blocks: qe_tracker_everglades 1223203 1224249 1230018
 
Reported: 2015-05-04 07:30 UTC by senaik
Modified: 2016-06-16 12:57 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1223203 1224249 1230018
Environment:
Last Closed: 2016-06-16 12:57:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description senaik 2015-05-04 07:30:12 UTC
Description of problem:
=======================
Initialising snap_scheduler from all nodes at the same time should fail with a proper error message - "Another snap scheduler command is running"

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May  1 2015

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a distributed-replicate volume and mount it.

2. Create another shared storage volume and mount it under /var/run/gluster/shared_storage.

3. Initialise the snap scheduler at the same time from all nodes:

Node1:
~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node

Node2, Node3, Node4:
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

It should fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again after some time.
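
The traceback shows an unguarded os.makedirs() call on the shared lock
directory racing between the nodes. A minimal sketch of the intended
handling (illustrative code only, assuming the LOCK_FILE_DIR value from the
traceback and a hypothetical function name, not the actual snap_scheduler.py
source):

import errno
import os

# Path taken from the traceback above; treated here as an assumption.
LOCK_FILE_DIR = '/var/run/gluster/shared_storage/snaps/lock_files/'

def ensure_lock_dir():
    # os.makedirs() is not atomic across nodes sharing the same mount, so a
    # concurrent "snap_scheduler.py init" on another node can create the
    # directory first, and this call then raises OSError with errno EEXIST.
    try:
        os.makedirs(LOCK_FILE_DIR)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            # Another invocation got there first; report the expected
            # message instead of letting the traceback escape.
            print("snap_scheduler: Another snap_scheduler command is "
                  "running. Please try again after some time.")
            return False
        raise
    return True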

Actual results:


Expected results:


Additional info:

Comment 1 senaik 2015-05-20 05:17:37 UTC
The same issue is seen when checking snap_scheduler.py status from all nodes at the same time:

Node1:
======
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

Node2, Node3, Node4:
===================
 snap_scheduler.py status
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 575, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 545, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists: '/var/run/gluster/shared_storage/snaps/lock_files/'

Comment 2 Anand Avati 2015-06-04 11:48:45 UTC
REVIEW: http://review.gluster.org/11087 (snapshot/scheduler: Handle OSError in os. callbacks) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2015-06-10 06:40:59 UTC
COMMIT: http://review.gluster.org/11087 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit d835219a30327ede60e4ef28210914ab30bd0712
Author: Avra Sengupta <asengupt>
Date:   Thu Jun 4 17:17:13 2015 +0530

    snapshot/scheduler: Handle OSError in os. callbacks
    
    Handle OSError and not IOError in os. callbacks.
    
    Change-Id: I2b5bfb629bacbd2d2e410d96034b4e2c11c4931e
    BUG: 1218060
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/11087
    Tested-by: Gluster Build System <jenkins.com>
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
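
The commit title points at the root cause: os.makedirs() and the other os.*
calls raise OSError, not IOError, so catching only IOError let the EEXIST
race above escape as a traceback. A rough sketch of the idea behind the
change (assumed shape with a hypothetical wrapper name, not the committed
patch itself):

import os

def handle_os_errors(func):
    # Hypothetical wrapper for os.* callbacks: catch OSError (which covers
    # the EEXIST race seen in the tracebacks) rather than IOError, and turn
    # it into a scheduler error message instead of an unhandled traceback.
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except OSError as err:
            print("snap_scheduler: Failed with error: %s" % err)
            return None
    return wrapper

@handle_os_errors
def create_dir(path):
    os.makedirs(path)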

Comment 5 Nagaprasad Sathyanarayana 2015-10-25 14:53:14 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed. Hence, this mainline BZ is being closed as well.

Comment 6 Niels de Vos 2016-06-16 12:57:38 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

