Bug 1209120 - [Snapshot] White-spaces are not handled properly in Snapshot scheduler
Summary: [Snapshot] White-spaces are not handled properly in Snapshot scheduler
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: snapshot
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Avra Sengupta
QA Contact:
URL:
Whiteboard: Scheduler
Depends On:
Blocks: qe_tracker_everglades
 
Reported: 2015-04-06 10:07 UTC by Anil Shah
Modified: 2015-05-14 17:35 UTC (History)
CC List: 6 users

Fixed In Version: glusterfs-3.7.0beta1
Clone Of:
Environment:
Last Closed: 2015-05-14 17:27:15 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Anil Shah 2015-04-06 10:07:08 UTC
Description of problem:

While adding jobs to the scheduler, white-space in the job name and volume name is not handled properly in the add operation, which then breaks the delete operation.

Version-Release number of selected component (if applicable):

[root@localhost upsteam_build]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Enabled
[root@localhost upsteam_build]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-rdma-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-libs-3.7dev-0.910.git17827de.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-api-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-geo-replication-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-cli-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-server-3.7dev-0.910.git17827de.el6.x86_64

How reproducible:

100%

Steps to Reproduce:

1. Create a 2*2 distributed-replicate volume.
2. Create the shared storage volume and mount it on each storage node at /var/run/gluster/snaps/shared_storage.
3. Initialize the scheduler on each storage node, e.g. run snap_scheduler.py init.
4. Enable the scheduler on the storage nodes, e.g. run snap_scheduler.py enable.
5. Add a job whose name contains white-space, e.g. snap_scheduler.py add "snap1 vol1 "  "* 11 * * *" vol0

Actual results:

[root@localhost ~]# snap_scheduler.py add "snap1 vol1 "  "* 11 * * *" vol0
snap_scheduler: Successfully added snapshot schedule
[root@localhost ~]# snap_scheduler.py list
JOB_NAME         SCHEDULE         OPERATION        VOLUME NAME      
--------------------------------------------------------------------
snap1            * 11 * * *       Snapshot Create  vol0             
[root@localhost ~]# snap_scheduler.py delete snap1
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 525, in <module>
    main()
  File "/usr/sbin/snap_scheduler.py", line 510, in main
    perform_operation(args)
  File "/usr/sbin/snap_scheduler.py", line 445, in perform_operation
    ret = delete_schedules(args.jobname)
  File "/usr/sbin/snap_scheduler.py", line 287, in delete_schedules
    os.remove(job_lockfile)
OSError: [Errno 2] No such file or directory: '/var/run/gluster/snaps/shared_storage/lock_files/snap1'
[root@localhost ~]# snap_scheduler.py list
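
The traceback suggests the lock file was never created under the name that the delete path later looks up (the job was added with white-space embedded in its name), so os.remove() fails with ENOENT and surfaces as an unhandled Python traceback. A minimal defensive sketch, using a hypothetical helper and the lock-file directory shown in the traceback rather than the actual snap_scheduler.py code:

import errno
import os

# Lock-file directory as seen in the traceback above.
LOCK_DIR = "/var/run/gluster/snaps/shared_storage/lock_files"

def remove_job_lockfile(jobname):
    # Hypothetical helper: tolerate a missing lock file and report a
    # clean scheduler error instead of raising an unhandled OSError.
    job_lockfile = os.path.join(LOCK_DIR, jobname)
    try:
        os.remove(job_lockfile)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            print("snap_scheduler: Lock file for job '%s' not found" % jobname)
            return False
        raise
    return True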


Expected results:

White-space in the job name and volume name should be handled properly: the scheduler should either reject such input with a clear error or strip it consistently, so the job can still be listed and deleted afterwards.

Additional info:

Comment 1 Anand Avati 2015-04-06 11:30:56 UTC
REVIEW: http://review.gluster.org/10137 (snapshot/scheduler: Check the correctness of Jobname and Volname) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2015-04-06 17:22:48 UTC
REVIEW: http://review.gluster.org/10137 (snapshot/scheduler: Check the correctness of Jobname and Volname) posted (#2) for review on master by Niels de Vos (ndevos)

Comment 3 Anand Avati 2015-04-07 05:49:23 UTC
REVIEW: http://review.gluster.org/10137 (snapshot/scheduler: Check the correctness of Jobname and Volname) posted (#3) for review on master by Rajesh Joseph (rjoseph)

Comment 4 Anand Avati 2015-04-07 13:37:38 UTC
COMMIT: http://review.gluster.org/10137 committed in master by Vijay Bellur (vbellur) 
------
commit 6816c7d46630747dd76cdd9ff90eab77e1e4f95c
Author: Avra Sengupta <asengupt>
Date:   Mon Apr 6 14:13:09 2015 +0530

    snapshot/scheduler: Check the correctness of Jobname and Volname
    
    Check for the correctness of Jobname and Volname. They should
    not be empty, and should contain only one word.
    
    If this condition is met, the remaining white-space is
    also stripped, before processing the command.
    
    Change-Id: I2c9503ab86456e0f4b37e31d483ee8b2d0b0e1af
    BUG: 1209120
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10137
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Rajesh Joseph <rjoseph>
    Reviewed-by: Vijay Bellur <vbellur>
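
As a rough illustration of the check described in the commit message (a hypothetical sketch, not the exact code merged via review.gluster.org/10137): the job name and volume name must be non-empty and contain exactly one word, and surrounding white-space is stripped before the command is processed.

def sanitize_name(name, what):
    # Hypothetical validator: reject empty or multi-word names and strip
    # surrounding white-space from valid ones.
    stripped = name.strip()
    if not stripped or len(stripped.split()) != 1:
        raise ValueError("snap_scheduler: Invalid %s %r: must be a single "
                         "non-empty word" % (what, name))
    return stripped

# With this in place, the command from the report is rejected up front:
#   sanitize_name("snap1 vol1 ", "jobname")  -> ValueError
#   sanitize_name("vol0", "volname")         -> "vol0"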

Comment 5 Niels de Vos 2015-05-14 17:27:15 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


