Description of problem:
When snapshot scheduling is disabled ("snap_scheduler.py status" reports it as disabled) and you try to list, delete, or edit scheduled jobs, you get the error message "snap_scheduler: Failed to edit snapshot schedule. Error: Snapshot scheduling is currently disabled."

Version-Release number of selected component (if applicable):
[root@localhost upsteam_build]# snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Enabled
[root@localhost upsteam_build]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-rdma-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-libs-3.7dev-0.910.git17827de.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-api-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-geo-replication-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-cli-3.7dev-0.910.git17827de.el6.x86_64
glusterfs-server-3.7dev-0.910.git17827de.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. Create the shared storage and mount it on each storage node.
3. Initialize the scheduler on each storage node, i.e. run "snap_scheduler.py init".
4. Enable the scheduler on the storage nodes, i.e. run "snap_scheduler.py enable".
5. Add some snapshot jobs, e.g. snap_scheduler.py add "snap1" "10 10 * * *" "vol0".
6. Disable the scheduler on the storage nodes, i.e. run "snap_scheduler.py disable".
7. Try to list, delete, or edit any scheduled job. (A condensed transcript of steps 5-7 appears after the volume info below.)

Actual results:
The scheduler fails with the error message:
snap_scheduler: Failed to edit snapshot schedule. Error: Snapshot scheduling is currently disabled.

Expected results:
Edit, list, and delete operations should be allowed even when the scheduler is disabled.

Additional info:
[root@localhost upsteam_build]# gluster v info

Volume Name: meta
Type: Distributed-Replicate
Volume ID: bb7646be-8096-4cb1-a836-6d8c7ef7a075
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick2/mb1
Brick2: 10.70.47.145:/rhs/brick2/mb2
Brick3: 10.70.47.150:/rhs/brick2/mb3
Brick4: 10.70.47.151:/rhs/brick2/mb4
Options Reconfigured:
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Volume Name: vol0
Type: Distributed-Replicate
Volume ID: 176c3b9a-6f08-434e-bc35-c4ef20dd0bcf
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Options Reconfigured:
features.barrier: disable
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
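For reference, a condensed transcript of reproduction steps 5-7 (commands only; the edit arguments are illustrative and the prompt is abbreviated -- only the final error line is quoted verbatim from the report):

    [root@localhost ~]# snap_scheduler.py add "snap1" "10 10 * * *" "vol0"
    [root@localhost ~]# snap_scheduler.py disable
    [root@localhost ~]# snap_scheduler.py edit "snap1" "20 10 * * *" "vol0"
    snap_scheduler: Failed to edit snapshot schedule. Error: Snapshot scheduling is currently disabled.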
REVIEW: http://review.gluster.org/10136 (snapshot/scheduler: Allow add,edit,list,delete of schedules even when snapshot scheduling is disabled.) posted (#1) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10136 (snapshot/scheduler: Allow add,edit,list,delete of schedules even when snapshot scheduling is disabled.) posted (#2) for review on master by Niels de Vos (ndevos)
REVIEW: http://review.gluster.org/10136 (snapshot/scheduler: Remove unwanted schedule enable check) posted (#3) for review on master by Rajesh Joseph (rjoseph)
COMMIT: http://review.gluster.org/10136 committed in master by Vijay Bellur (vbellur)
------
commit 1a22f3e10c82847f054fc4b57977e059dab8ac29
Author: Avra Sengupta <asengupt>
Date:   Mon Apr 6 14:56:21 2015 +0530

    snapshot/scheduler: Remove unwanted schedule enable check

    Allow add, edit, list, delete of schedules even when
    snapshot scheduling is disabled.

    Change-Id: Ie55ea7d6e9b3fccd914a786cc54bb323ac765a98
    BUG: 1209117
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10136
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Aravinda VK <avishwan>
    Reviewed-by: Rajesh Joseph <rjoseph>
    Reviewed-by: Vijay Bellur <vbellur>
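The gist of the fix, as a minimal self-contained Python sketch (hypothetical names throughout, not the actual snap_scheduler.py source): the "is scheduling enabled?" guard is dropped from the schedule CRUD paths and kept only where snapshots are actually taken.

    # Hypothetical illustration of the fix in commit 1a22f3e:
    # add/edit/list/delete operate on the stored job list regardless of
    # the enabled flag; only snapshot execution honours it.

    scheduler_enabled = False   # state toggled by "enable"/"disable"
    schedules = {}              # job name -> (cron expression, volume)

    def add_schedule(name, cron, volume):
        # No enabled-check here any more: editing the stored schedules
        # is always allowed, matching the expected results above.
        schedules[name] = (cron, volume)

    def list_schedules():
        for name, (cron, volume) in sorted(schedules.items()):
            print("%s: %s on volume %s" % (name, cron, volume))

    def take_due_snapshots():
        # Only actual snapshot execution checks the enabled flag.
        if not scheduler_enabled:
            return
        # ... trigger snapshot creation for due jobs ...

    add_schedule("snap1", "10 10 * * *", "vol0")
    list_schedules()   # works even though scheduling is disabled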
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user