Bug 1209408 - [Snapshot] Scheduler should accept only valid crond schedules
Summary: [Snapshot] Scheduler should accept only valid crond schedules
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades
 
Reported: 2015-04-07 10:06 UTC by Anil Shah
Modified: 2015-05-14 17:35 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-14 17:27:15 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anil Shah 2015-04-07 10:06:40 UTC
Description of problem:

While adding jobs to the scheduler for creating snapshots, the schedule field accepts invalid crond formats.

Version-Release number of selected component (if applicable):

[root@localhost ~]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-rdma-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-libs-3.7dev-0.910.git17827de.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-api-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-geo-replication-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-cli-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-server-3.7dev-0.910.git17827de.el6.x86_64

How reproducible:

100%

Steps to Reproduce:

1. Create a 6x2 distributed-replicate volume.
2. Create shared storage and mount it on each storage node at /var/run/gluster/snaps/shared_storage.
3. Initialize the scheduler on each storage node, i.e. run the snap_scheduler.py init command.
4. Enable the scheduler on the storage nodes, i.e. run snap_scheduler.py enable.
5. Add jobs to the scheduler, providing invalid crond formats, e.g.:

snap_scheduler.py add job2 " * * * * * * * * " vol0
snap_scheduler.py add job2 " a b c d e f * " vol0

Actual results:

The scheduler accepts invalid crond formats:

[root@localhost ~]# snap_scheduler.py add job2 " * * * * * * * * " vol0
snap_scheduler: Successfully added snapshot schedule
[root@localhost ~]# snap_scheduler.py add job2 " a b c d e f * " vol0
snap_scheduler: job2 already exists in schedule. Use 'edit' to modify job2

Expected results:

The scheduler should not accept invalid crond formats. A syntax check should be performed on the schedule field when adding jobs.
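
For illustration only, a minimal sketch in Python (the language snap_scheduler.py is written in) of the kind of per-field syntax check requested here; the helper names and exact rules below are hypothetical, not the actual snap_scheduler.py code:

    import re

    # The five crond fields and their allowed numeric ranges. Note that
    # stock crond allows 0-7 for day of week (0 and 7 both meaning Sunday);
    # the ranges quoted in the eventual fix (see Comment 4) differ slightly.
    CRON_FIELDS = [("minute", 0, 59), ("hour", 0, 23),
                   ("day of month", 1, 31), ("month", 1, 12),
                   ("day of week", 0, 7)]

    def _valid_field(token, lo, hi):
        """Accept '*', '*/N', 'N', 'A-B', and comma-separated lists of these."""
        for part in token.split(","):
            if part == "*":
                continue
            m = re.fullmatch(r"\*/(\d+)", part)   # step values, e.g. */30
            if m:
                if int(m.group(1)) < 1:
                    return False
                continue
            m = re.fullmatch(r"(\d+)(?:-(\d+))?", part)
            if m is None:
                return False
            a = int(m.group(1))
            b = int(m.group(2)) if m.group(2) else a
            if not (lo <= a <= b <= hi):
                return False
        return True

    def valid_schedule(schedule):
        fields = schedule.split()
        if len(fields) != len(CRON_FIELDS):
            return False
        return all(_valid_field(tok, lo, hi)
                   for tok, (_, lo, hi) in zip(fields, CRON_FIELDS))

    # Both invalid schedules from this report are rejected:
    assert not valid_schedule(" * * * * * * * * ")   # too many fields
    assert not valid_schedule(" a b c d e f * ")     # non-numeric fields
    assert valid_schedule("*/30 * * * *")            # valid: every 30 minutes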

Additional info:

[root@localhost ~]# gluster v info
 
Volume Name: meta
Type: Replicate
Volume ID: ac4dea2d-b500-4666-8a7e-47feac72069c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/meta1
Brick2: 10.70.47.145:/rhs/brick1/meta2
Options Reconfigured:
features.barrier: disable
 
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: cd7621bf-7cb6-4b5f-92ec-e8592fe308ce
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick2/b5
Brick6: 10.70.47.145:/rhs/brick2/b6
Brick7: 10.70.47.150:/rhs/brick2/b7
Brick8: 10.70.47.151:/rhs/brick2/b8
Brick9: 10.70.47.143:/rhs/brick3/b9
Brick10: 10.70.47.145:/rhs/brick3/10
Brick11: 10.70.47.150:/rhs/brick3/b11
Brick12: 10.70.47.151:/rhs/brick3/b12
Options Reconfigured:
features.barrier: disable

Comment 1 senaik 2015-04-09 06:46:40 UTC
Version:
========
gluster --version
glusterfs 3.7dev built on Apr  9 2015 01:10:22

Currently, when a job is added without a schedule, the add succeeds and the job can be listed.

[root@inception ~]# snap_scheduler.py add "J2" " " "vol0"
snap_scheduler: Successfully added snapshot schedule


[root@inception ~]# snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
J1               */30 * * * *     Snapshot Create  vol0             
J2                                Snapshot Create  vol0   

It should error out with the message:
snap_scheduler: Invalid Schedule. Schedule should not be empty and should not contain " " character.
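
A guard of roughly this shape (hypothetical; not the actual snap_scheduler.py code) would produce that error:

    def check_schedule_not_empty(schedule):
        # Reject empty or whitespace-only schedules before any parsing.
        if not schedule or not schedule.strip():
            print('snap_scheduler: Invalid Schedule. Schedule should not '
                  'be empty and should not contain " " character.')
            return False
        return True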

Comment 2 Anand Avati 2015-04-09 09:32:36 UTC
REVIEW: http://review.gluster.org/10169 (snapshot/scheduler: Validate the number of entries in schedule) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2015-04-09 12:07:21 UTC
REVIEW: http://review.gluster.org/10169 (snapshot/scheduler: Validate the number of entries in schedule) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 4 Anand Avati 2015-04-10 05:04:30 UTC
COMMIT: http://review.gluster.org/10169 committed in master by Krishnan Parthasarathi (kparthas) 
------
commit 10ed06a5a1ec396bb8fc7cc1fa8182d93bf7dbb5
Author: Avra Sengupta <asengupt>
Date:   Thu Apr 9 14:58:25 2015 +0530

    snapshot/scheduler: Validate the number of entries in schedule
    
    A valid schedule entry in snapshot schedule must have
    six elements and adhere to the following format
    
    * * * * *
    | | | | |
    | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
    | | | +------ Month of the Year (range: 1-12)
    | | +-------- Day of the Month  (range: 1-31)
    | +---------- Hour              (range: 0-23)
    +------------ Minute            (range: 0-59)
    
    Change-Id: Idf03a3c43a461295dd3e2026bbcd0420319dd0e0
    BUG: 1209408
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10169
    Reviewed-by: Aravinda VK <avishwan>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
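
For context, a minimal sketch of the entry-count check the commit summary describes, assuming the standard five-field cron string shown in the diagram above (the commit text counts six elements, and the actual change at review.gluster.org/10169 may structure the check differently):

    def validate_entry_count(schedule):
        # A crond schedule line has exactly five space-separated fields:
        # minute, hour, day of month, month, day of week.
        return len(schedule.split()) == 5

    print(validate_entry_count(" * * * * * * * * "))  # False: 8 fields
    print(validate_entry_count(" a b c d e f * "))    # False: 7 fields
    print(validate_entry_count("*/30 * * * *"))       # True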

Comment 5 Niels de Vos 2015-05-14 17:27:15 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


