Description of problem:
Do not run scheduler if ovirt scheduler is running

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: http://review.gluster.org/10641 (snapshot/scheduler: Do not enable scheduler if another scheduler is running) posted (#1) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10641 (snapshot/scheduler: Do not enable scheduler if another scheduler is running) posted (#3) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10641 (snapshot/scheduler: Do not enable scheduler if another scheduler is running) posted (#4) for review on master by Avra Sengupta (asengupt)
REVIEW: http://review.gluster.org/10641 (snapshot/scheduler: Do not enable scheduler if another scheduler is running) posted (#5) for review on master by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/10641 committed in master by Kaushal M (kaushal)
------
commit d67eb34b2a5b5e3cb926ff4c86a163148743829c
Author: Avra Sengupta <asengupt>
Date:   Thu May 7 17:50:25 2015 +0530

    snapshot/scheduler: Do not enable scheduler if another scheduler is running

    Check if another snapshot scheduler is running before enabling the
    scheduler.

    Also introducing a hidden option, disable_force.
    "snapshot_scheduler.py disable_force" will disable the cli snapshot
    scheduler from any node, even though the node has not been initialised
    for the scheduler, as long as the shared storage is mounted.

    This option is hidden, because we don't want to encourage users to use
    all commands from nodes that are not initialised.

    Change-Id: I7ad941fbbab834225a36e740c61f8e740813e7c8
    BUG: 1219442
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/10641
    Reviewed-by: Rajesh Joseph <rjoseph>
    Tested-by: NetBSD Build System
    Reviewed-by: Kaushal M <kaushal>
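The check described in the commit above can be sketched as follows. This is a minimal illustration, not the actual GlusterFS implementation: the marker path, function names, and the assumption that a competing scheduler (e.g. the oVirt scheduler) is detected via a file under the shared-storage mount are all hypothetical.

```python
import os

# Hypothetical paths for illustration; the real scheduler keeps its own
# bookkeeping under the gluster shared-storage mount.
SHARED_STORAGE = "/run/gluster/shared_storage"
OVIRT_MARKER = os.path.join(SHARED_STORAGE, "snaps/ovirt_scheduler")


def other_scheduler_running(marker=OVIRT_MARKER):
    """Return True if a competing scheduler has left its marker file."""
    return os.path.exists(marker)


def enable_scheduler(marker=OVIRT_MARKER):
    """Refuse to enable the cli snapshot scheduler when another is active."""
    if other_scheduler_running(marker):
        print("Another scheduler is running; not enabling.")
        return 1  # non-zero status, mirroring a cli failure
    # ... proceed to enable the cli snapshot scheduler here ...
    return 0
```

Under this sketch, `disable_force` would simply skip the per-node initialisation check and remove the scheduler's own state from the shared storage, which is why it works from any node with the mount present.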
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, exists and has been closed. Hence closing this mainline BZ as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user