Bug 1210204 - [SNAPSHOT] - Unable to delete scheduled jobs
Summary: [SNAPSHOT] - Unable to delete scheduled jobs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard: Scheduler
Depends On:
Blocks: qe_tracker_everglades
 
Reported: 2015-04-09 07:40 UTC by senaik
Modified: 2015-09-01 12:23 UTC
CC: 4 users

Fixed In Version: glusterfs-3.7.0
Clone Of:
Environment:
Last Closed: 2015-05-15 17:09:34 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description senaik 2015-04-09 07:40:37 UTC
Description of problem:
=======================
Deleting a scheduled snapshot job via snap_scheduler.py fails with a Python traceback (AttributeError).

Version-Release number of selected component (if applicable):
=============================================================
gluster --version
glusterfs 3.7dev built on Apr  9 2015 01:10:22


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Create a 6x2 distributed-replicate volume and start it.
   Enable USS and quota on the volume.

2. Fuse and NFS mount the volume and run some I/O.

3. Create another distributed-replicate volume – this is the shared storage which will be mounted on all nodes.

4. Initialise the snapshot scheduler on all nodes using snap_scheduler.py init

5. Enable the snap scheduler on all nodes using snap_scheduler.py enable

6. Add a job that creates a new snapshot of the volume every 30 minutes using snap_scheduler.py add J1 "*/30 * * * * " vol0

7. List the scheduled jobs:
   snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
J1               */30 * * * *     Snapshot Create  vol0             

8. Delete the job:
   snap_scheduler.py delete J1
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 517, in <module>
    main()
  File "/usr/sbin/snap_scheduler.py", line 502, in main
    perform_operation(args)
  File "/usr/sbin/snap_scheduler.py", line 431, in perform_operation
    ret = syntax_checker(args)
  File "/usr/sbin/snap_scheduler.py", line 348, in syntax_checker
    if (len(args.volname.split()) != 1):
AttributeError: 'Namespace' object has no attribute 'volname'


Actual results:
==============
Deleting a scheduled job fails with AttributeError: 'Namespace' object has no attribute 'volname'

Expected results:
=================
Deleting a scheduled job should succeed.

Additional info:
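The traceback points at the root cause: syntax_checker() unconditionally reads args.volname, but the delete sub-command's argparse Namespace never defines that attribute (delete takes only a job name). A minimal, self-contained sketch of this failure mode follows; the parser below is a hypothetical model of the CLI, not the actual snap_scheduler.py code:

```python
import argparse

def syntax_checker(args):
    # Mirrors the failing check: it assumes every sub-command's
    # Namespace carries a 'volname' attribute.
    if len(args.volname.split()) != 1:
        return 1
    return 0

# Hypothetical parser modelled on the CLI: 'add' takes a volume name,
# 'delete' takes only the job name, so its Namespace has no 'volname'.
parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="action")
add_p = sub.add_parser("add")
add_p.add_argument("jobname")
add_p.add_argument("schedule")
add_p.add_argument("volname")
del_p = sub.add_parser("delete")
del_p.add_argument("jobname")

args = parser.parse_args(["delete", "J1"])
try:
    syntax_checker(args)
except AttributeError as e:
    print(e)  # 'Namespace' object has no attribute 'volname'
```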

Comment 1 Anand Avati 2015-04-09 09:32:33 UTC
REVIEW: http://review.gluster.org/10168 (snapshot/scheduler: Fix deleting of snapshot schedule) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2015-04-09 12:07:19 UTC
REVIEW: http://review.gluster.org/10168 (snapshot/scheduler: Fix deleting of snapshot schedule) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2015-04-10 05:02:52 UTC
COMMIT: http://review.gluster.org/10168 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit 14dcabf21d308b69d0ec0a3ed910953f22e3aed8
Author: Avra Sengupta <asengupt>
Date:   Thu Apr 9 14:24:43 2015 +0530

    snapshot/scheduler: Fix deleting of snapshot schedule
    
    Check if the argument has an attribute before
    validating the attribute.
    
    Change-Id: Ia4c6c91c2fca2ec3e82b47d81fbc19a5e0f17eb4
    BUG: 1210204
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10168
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
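The commit's approach — check that the Namespace actually has the attribute before validating it — can be sketched as below. This is an illustrative guard in the spirit of the fix, not the actual patch from review 10168:

```python
def syntax_checker(args):
    # Guard each validation with hasattr, so sub-commands whose Namespace
    # lacks a given attribute (e.g. 'delete', which takes only a job name)
    # simply skip that check instead of raising AttributeError.
    if hasattr(args, "volname"):
        if len(args.volname.split()) != 1:
            return 1  # volume name must be a single word
    if hasattr(args, "jobname"):
        if len(args.jobname.split()) != 1:
            return 1  # job name must be a single word
    return 0

class Args:
    # Minimal stand-in for an argparse.Namespace.
    pass

a = Args()
a.jobname = "J1"          # a 'delete' invocation: no volname attribute
print(syntax_checker(a))  # 0, instead of raising AttributeError
```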

Comment 4 Niels de Vos 2015-05-15 17:09:34 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

